Test Report: Docker_Linux_containerd 21550

0aba0a8e31d541259ffdeb45c9650281430067b8:2025-09-17:41464

Failed tests (15/329)

TestMultiControlPlane/serial/DeployApp (727.37s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 kubectl -- rollout status deployment/busybox
E0917 00:00:37.165621  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/addons-346612/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:00:49.959565  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/functional-695580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:00:49.965949  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/functional-695580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:00:49.977261  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/functional-695580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:00:49.998582  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/functional-695580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:00:50.039930  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/functional-695580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:00:50.121329  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/functional-695580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:00:50.282823  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/functional-695580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:00:50.604501  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/functional-695580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:00:51.246531  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/functional-695580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:00:52.528647  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/functional-695580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:00:55.091343  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/functional-695580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:01:00.212940  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/functional-695580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:01:04.871873  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/addons-346612/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:01:10.454509  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/functional-695580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:01:30.936721  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/functional-695580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:02:11.899991  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/functional-695580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:03:33.824343  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/functional-695580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:05:37.165924  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/addons-346612/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:05:49.960581  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/functional-695580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:06:17.665714  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/functional-695580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:133: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-472903 kubectl -- rollout status deployment/busybox: exit status 1 (10m6.101476737s)

-- stdout --
	Waiting for deployment "busybox" rollout to finish: 0 of 6 updated replicas are available...
	Waiting for deployment "busybox" rollout to finish: 0 out of 3 new replicas have been updated...
	Waiting for deployment "busybox" rollout to finish: 0 of 6 updated replicas are available...
	Waiting for deployment "busybox" rollout to finish: 0 of 3 updated replicas are available...
	Waiting for deployment "busybox" rollout to finish: 1 of 3 updated replicas are available...
	Waiting for deployment "busybox" rollout to finish: 2 of 3 updated replicas are available...

-- /stdout --
** stderr ** 
	error: deployment "busybox" exceeded its progress deadline

** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
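The rollout stalled at 2 of 3 available replicas and then tripped the deployment's progress deadline. A minimal diagnostic sketch (not part of the captured log; standard kubectl subcommands run through the same minikube profile) to see which replica is stuck and on which node:

	# check pod placement and container state across the cluster nodes
	out/minikube-linux-amd64 -p ha-472903 kubectl -- get pods -o wide
	# deployment conditions should report ProgressDeadlineExceeded and point at the stuck ReplicaSet
	out/minikube-linux-amd64 -p ha-472903 kubectl -- describe deployment busybox
	# recent cluster events, newest last
	out/minikube-linux-amd64 -p ha-472903 kubectl -- get events --sort-by=.lastTimestamp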
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
I0917 00:08:44.133128  752707 retry.go:31] will retry after 1.232951173s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
I0917 00:08:45.477139  752707 retry.go:31] will retry after 1.478633877s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
I0917 00:08:47.066267  752707 retry.go:31] will retry after 2.434809372s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
I0917 00:08:49.614564  752707 retry.go:31] will retry after 3.42692877s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
I0917 00:08:53.156516  752707 retry.go:31] will retry after 2.581888882s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
I0917 00:08:55.853873  752707 retry.go:31] will retry after 9.102938056s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
I0917 00:09:05.075104  752707 retry.go:31] will retry after 8.755033071s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
I0917 00:09:13.945883  752707 retry.go:31] will retry after 8.673554633s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
I0917 00:09:22.733937  752707 retry.go:31] will retry after 33.880920566s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
I0917 00:09:56.737430  752707 retry.go:31] will retry after 44.806125277s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
E0917 00:10:37.165575  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/addons-346612/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:159: failed to resolve pod IPs: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 kubectl -- exec busybox-7b57f96db7-4jfjt -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 kubectl -- exec busybox-7b57f96db7-6hrm6 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 kubectl -- exec busybox-7b57f96db7-mknzs -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-472903 kubectl -- exec busybox-7b57f96db7-mknzs -- nslookup kubernetes.io: exit status 1 (122.871883ms)

** stderr ** 
	error: Internal error occurred: unable to upgrade connection: container not found ("busybox")

** /stderr **
ha_test.go:173: Pod busybox-7b57f96db7-mknzs could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 kubectl -- exec busybox-7b57f96db7-4jfjt -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 kubectl -- exec busybox-7b57f96db7-6hrm6 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 kubectl -- exec busybox-7b57f96db7-mknzs -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-472903 kubectl -- exec busybox-7b57f96db7-mknzs -- nslookup kubernetes.default: exit status 1 (120.513792ms)

** stderr ** 
	error: Internal error occurred: unable to upgrade connection: container not found ("busybox")

** /stderr **
ha_test.go:183: Pod busybox-7b57f96db7-mknzs could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 kubectl -- exec busybox-7b57f96db7-4jfjt -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 kubectl -- exec busybox-7b57f96db7-6hrm6 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 kubectl -- exec busybox-7b57f96db7-mknzs -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-472903 kubectl -- exec busybox-7b57f96db7-mknzs -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (119.873118ms)

** stderr ** 
	error: Internal error occurred: unable to upgrade connection: container not found ("busybox")

** /stderr **
ha_test.go:191: Pod busybox-7b57f96db7-mknzs could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
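All three exec probes against busybox-7b57f96db7-mknzs fail with 'container not found ("busybox")', which usually means the pod's container is not (or no longer) running on its node. A hedged follow-up sketch, reusing the pod name from the log above:

	# confirm the pod phase, restart count, and the node it was scheduled to
	out/minikube-linux-amd64 -p ha-472903 kubectl -- get pod busybox-7b57f96db7-mknzs -o wide
	# container statuses and events explain why the container is missing (e.g. image pull failure or crash loop)
	out/minikube-linux-amd64 -p ha-472903 kubectl -- describe pod busybox-7b57f96db7-mknzs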
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-472903
helpers_test.go:243: (dbg) docker inspect ha-472903:

-- stdout --
	[
	    {
	        "Id": "05f03528ecc5ba6a39041bcc2845d236679d61fa3752c15e7e068dac7d8c9047",
	        "Created": "2025-09-16T23:56:35.178831158Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 804802,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-16T23:56:35.209552026Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/05f03528ecc5ba6a39041bcc2845d236679d61fa3752c15e7e068dac7d8c9047/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/05f03528ecc5ba6a39041bcc2845d236679d61fa3752c15e7e068dac7d8c9047/hostname",
	        "HostsPath": "/var/lib/docker/containers/05f03528ecc5ba6a39041bcc2845d236679d61fa3752c15e7e068dac7d8c9047/hosts",
	        "LogPath": "/var/lib/docker/containers/05f03528ecc5ba6a39041bcc2845d236679d61fa3752c15e7e068dac7d8c9047/05f03528ecc5ba6a39041bcc2845d236679d61fa3752c15e7e068dac7d8c9047-json.log",
	        "Name": "/ha-472903",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-472903:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-472903",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "05f03528ecc5ba6a39041bcc2845d236679d61fa3752c15e7e068dac7d8c9047",
	                "LowerDir": "/var/lib/docker/overlay2/37229b42d46c992f89d690b880f5a9c43e154eecc2ad5aeee133e9eb30accccb-init/diff:/var/lib/docker/overlay2/949a3fbecd0c2c005aa419b7ddc214e7bf4333225d7b227e8b0d0ea188b945ec/diff",
	                "MergedDir": "/var/lib/docker/overlay2/37229b42d46c992f89d690b880f5a9c43e154eecc2ad5aeee133e9eb30accccb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/37229b42d46c992f89d690b880f5a9c43e154eecc2ad5aeee133e9eb30accccb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/37229b42d46c992f89d690b880f5a9c43e154eecc2ad5aeee133e9eb30accccb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-472903",
	                "Source": "/var/lib/docker/volumes/ha-472903/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-472903",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-472903",
	                "name.minikube.sigs.k8s.io": "ha-472903",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "abe382ce28757e80b5cdae91a64217d3672b21c23f3517480bd53105aeca147e",
	            "SandboxKey": "/var/run/docker/netns/abe382ce2875",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33544"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33545"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33548"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33546"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33547"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-472903": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7e:42:9f:f6:50:c2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "22d49b2f397dfabc2a3967bd54b05204a52976e683f65ff07bff00e793040bef",
	                    "EndpointID": "4d4d83129a167c8183e8ef58cc6057f613d8d69adf59710ba6c623d1ff2970c6",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-472903",
	                        "05f03528ecc5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
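For reference, the same post-mortem details can be pulled selectively instead of dumping the whole JSON document; a small sketch using docker inspect's Go-template filter (standard Docker CLI behaviour, not something the test harness does):

	# container state only
	docker inspect ha-472903 --format '{{.State.Status}} pid={{.State.Pid}}'
	# published host ports as JSON
	docker inspect ha-472903 --format '{{json .NetworkSettings.Ports}}'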
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-472903 -n ha-472903
helpers_test.go:252: <<< TestMultiControlPlane/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p ha-472903 logs -n 25: (1.093630099s)
helpers_test.go:260: TestMultiControlPlane/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                         ARGS                                                          │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p functional-695580                                                                                                  │ functional-695580 │ jenkins │ v1.37.0 │ 16 Sep 25 23:56 UTC │ 16 Sep 25 23:56 UTC │
	│ start   │ ha-472903 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd │ ha-472903         │ jenkins │ v1.37.0 │ 16 Sep 25 23:56 UTC │ 16 Sep 25 23:58 UTC │
	│ kubectl │ ha-472903 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                      │ ha-472903         │ jenkins │ v1.37.0 │ 16 Sep 25 23:58 UTC │ 16 Sep 25 23:58 UTC │
	│ kubectl │ ha-472903 kubectl -- rollout status deployment/busybox                                                                │ ha-472903         │ jenkins │ v1.37.0 │ 16 Sep 25 23:58 UTC │                     │
	│ kubectl │ ha-472903 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                  │ ha-472903         │ jenkins │ v1.37.0 │ 17 Sep 25 00:08 UTC │ 17 Sep 25 00:08 UTC │
	│ kubectl │ ha-472903 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                  │ ha-472903         │ jenkins │ v1.37.0 │ 17 Sep 25 00:08 UTC │ 17 Sep 25 00:08 UTC │
	│ kubectl │ ha-472903 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                  │ ha-472903         │ jenkins │ v1.37.0 │ 17 Sep 25 00:08 UTC │ 17 Sep 25 00:08 UTC │
	│ kubectl │ ha-472903 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                  │ ha-472903         │ jenkins │ v1.37.0 │ 17 Sep 25 00:08 UTC │ 17 Sep 25 00:08 UTC │
	│ kubectl │ ha-472903 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                  │ ha-472903         │ jenkins │ v1.37.0 │ 17 Sep 25 00:08 UTC │ 17 Sep 25 00:08 UTC │
	│ kubectl │ ha-472903 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                  │ ha-472903         │ jenkins │ v1.37.0 │ 17 Sep 25 00:08 UTC │ 17 Sep 25 00:08 UTC │
	│ kubectl │ ha-472903 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                  │ ha-472903         │ jenkins │ v1.37.0 │ 17 Sep 25 00:09 UTC │ 17 Sep 25 00:09 UTC │
	│ kubectl │ ha-472903 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                  │ ha-472903         │ jenkins │ v1.37.0 │ 17 Sep 25 00:09 UTC │ 17 Sep 25 00:09 UTC │
	│ kubectl │ ha-472903 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                  │ ha-472903         │ jenkins │ v1.37.0 │ 17 Sep 25 00:09 UTC │ 17 Sep 25 00:09 UTC │
	│ kubectl │ ha-472903 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                  │ ha-472903         │ jenkins │ v1.37.0 │ 17 Sep 25 00:09 UTC │ 17 Sep 25 00:09 UTC │
	│ kubectl │ ha-472903 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                  │ ha-472903         │ jenkins │ v1.37.0 │ 17 Sep 25 00:10 UTC │ 17 Sep 25 00:10 UTC │
	│ kubectl │ ha-472903 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                                 │ ha-472903         │ jenkins │ v1.37.0 │ 17 Sep 25 00:10 UTC │ 17 Sep 25 00:10 UTC │
	│ kubectl │ ha-472903 kubectl -- exec busybox-7b57f96db7-4jfjt -- nslookup kubernetes.io                                          │ ha-472903         │ jenkins │ v1.37.0 │ 17 Sep 25 00:10 UTC │ 17 Sep 25 00:10 UTC │
	│ kubectl │ ha-472903 kubectl -- exec busybox-7b57f96db7-6hrm6 -- nslookup kubernetes.io                                          │ ha-472903         │ jenkins │ v1.37.0 │ 17 Sep 25 00:10 UTC │ 17 Sep 25 00:10 UTC │
	│ kubectl │ ha-472903 kubectl -- exec busybox-7b57f96db7-mknzs -- nslookup kubernetes.io                                          │ ha-472903         │ jenkins │ v1.37.0 │ 17 Sep 25 00:10 UTC │                     │
	│ kubectl │ ha-472903 kubectl -- exec busybox-7b57f96db7-4jfjt -- nslookup kubernetes.default                                     │ ha-472903         │ jenkins │ v1.37.0 │ 17 Sep 25 00:10 UTC │ 17 Sep 25 00:10 UTC │
	│ kubectl │ ha-472903 kubectl -- exec busybox-7b57f96db7-6hrm6 -- nslookup kubernetes.default                                     │ ha-472903         │ jenkins │ v1.37.0 │ 17 Sep 25 00:10 UTC │ 17 Sep 25 00:10 UTC │
	│ kubectl │ ha-472903 kubectl -- exec busybox-7b57f96db7-mknzs -- nslookup kubernetes.default                                     │ ha-472903         │ jenkins │ v1.37.0 │ 17 Sep 25 00:10 UTC │                     │
	│ kubectl │ ha-472903 kubectl -- exec busybox-7b57f96db7-4jfjt -- nslookup kubernetes.default.svc.cluster.local                   │ ha-472903         │ jenkins │ v1.37.0 │ 17 Sep 25 00:10 UTC │ 17 Sep 25 00:10 UTC │
	│ kubectl │ ha-472903 kubectl -- exec busybox-7b57f96db7-6hrm6 -- nslookup kubernetes.default.svc.cluster.local                   │ ha-472903         │ jenkins │ v1.37.0 │ 17 Sep 25 00:10 UTC │ 17 Sep 25 00:10 UTC │
	│ kubectl │ ha-472903 kubectl -- exec busybox-7b57f96db7-mknzs -- nslookup kubernetes.default.svc.cluster.local                   │ ha-472903         │ jenkins │ v1.37.0 │ 17 Sep 25 00:10 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/16 23:56:30
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 23:56:30.301112  804231 out.go:360] Setting OutFile to fd 1 ...
	I0916 23:56:30.301322  804231 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0916 23:56:30.301330  804231 out.go:374] Setting ErrFile to fd 2...
	I0916 23:56:30.301335  804231 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0916 23:56:30.301535  804231 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-749120/.minikube/bin
	I0916 23:56:30.302024  804231 out.go:368] Setting JSON to false
	I0916 23:56:30.302925  804231 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":9532,"bootTime":1758057458,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 23:56:30.303027  804231 start.go:140] virtualization: kvm guest
	I0916 23:56:30.304965  804231 out.go:179] * [ha-472903] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0916 23:56:30.306181  804231 out.go:179]   - MINIKUBE_LOCATION=21550
	I0916 23:56:30.306189  804231 notify.go:220] Checking for updates...
	I0916 23:56:30.308309  804231 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 23:56:30.309530  804231 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-749120/kubeconfig
	I0916 23:56:30.310577  804231 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-749120/.minikube
	I0916 23:56:30.311523  804231 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 23:56:30.312490  804231 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 23:56:30.313634  804231 driver.go:421] Setting default libvirt URI to qemu:///system
	I0916 23:56:30.336203  804231 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0916 23:56:30.336330  804231 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 23:56:30.390690  804231 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-16 23:56:30.380521507 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 23:56:30.390801  804231 docker.go:318] overlay module found
	I0916 23:56:30.392435  804231 out.go:179] * Using the docker driver based on user configuration
	I0916 23:56:30.393493  804231 start.go:304] selected driver: docker
	I0916 23:56:30.393505  804231 start.go:918] validating driver "docker" against <nil>
	I0916 23:56:30.393517  804231 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 23:56:30.394092  804231 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 23:56:30.448140  804231 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-16 23:56:30.438500908 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 23:56:30.448302  804231 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0916 23:56:30.448529  804231 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 23:56:30.450143  804231 out.go:179] * Using Docker driver with root privileges
	I0916 23:56:30.451156  804231 cni.go:84] Creating CNI manager for ""
	I0916 23:56:30.451216  804231 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0916 23:56:30.451226  804231 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 23:56:30.451301  804231 start.go:348] cluster config:
	{Name:ha-472903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPl
ugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m
0s}
	I0916 23:56:30.452491  804231 out.go:179] * Starting "ha-472903" primary control-plane node in "ha-472903" cluster
	I0916 23:56:30.453469  804231 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0916 23:56:30.454617  804231 out.go:179] * Pulling base image v0.0.48 ...
	I0916 23:56:30.455626  804231 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0916 23:56:30.455658  804231 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4
	I0916 23:56:30.455669  804231 cache.go:58] Caching tarball of preloaded images
	I0916 23:56:30.455737  804231 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0916 23:56:30.455747  804231 preload.go:172] Found /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 23:56:30.455875  804231 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0916 23:56:30.456208  804231 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0916 23:56:30.456245  804231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json: {Name:mkb16495f6ef626fa58a9600f3b4a943b5aaf14d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:56:30.475568  804231 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0916 23:56:30.475587  804231 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0916 23:56:30.475611  804231 cache.go:232] Successfully downloaded all kic artifacts
	I0916 23:56:30.475644  804231 start.go:360] acquireMachinesLock for ha-472903: {Name:mk994658ce3314f2aed1dec341debc49d36a4326 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 23:56:30.475759  804231 start.go:364] duration metric: took 97.738µs to acquireMachinesLock for "ha-472903"
	I0916 23:56:30.475786  804231 start.go:93] Provisioning new machine with config: &{Name:ha-472903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APISer
verIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath
: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 23:56:30.475881  804231 start.go:125] createHost starting for "" (driver="docker")
	I0916 23:56:30.477680  804231 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0916 23:56:30.477953  804231 start.go:159] libmachine.API.Create for "ha-472903" (driver="docker")
	I0916 23:56:30.477986  804231 client.go:168] LocalClient.Create starting
	I0916 23:56:30.478060  804231 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem
	I0916 23:56:30.478097  804231 main.go:141] libmachine: Decoding PEM data...
	I0916 23:56:30.478118  804231 main.go:141] libmachine: Parsing certificate...
	I0916 23:56:30.478203  804231 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem
	I0916 23:56:30.478234  804231 main.go:141] libmachine: Decoding PEM data...
	I0916 23:56:30.478247  804231 main.go:141] libmachine: Parsing certificate...
	I0916 23:56:30.478706  804231 cli_runner.go:164] Run: docker network inspect ha-472903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0916 23:56:30.494743  804231 cli_runner.go:211] docker network inspect ha-472903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0916 23:56:30.494806  804231 network_create.go:284] running [docker network inspect ha-472903] to gather additional debugging logs...
	I0916 23:56:30.494829  804231 cli_runner.go:164] Run: docker network inspect ha-472903
	W0916 23:56:30.510851  804231 cli_runner.go:211] docker network inspect ha-472903 returned with exit code 1
	I0916 23:56:30.510886  804231 network_create.go:287] error running [docker network inspect ha-472903]: docker network inspect ha-472903: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-472903 not found
	I0916 23:56:30.510919  804231 network_create.go:289] output of [docker network inspect ha-472903]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-472903 not found
	
	** /stderr **
	I0916 23:56:30.511007  804231 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:56:30.527272  804231 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001b12870}
	I0916 23:56:30.527312  804231 network_create.go:124] attempt to create docker network ha-472903 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0916 23:56:30.527357  804231 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-472903 ha-472903
	I0916 23:56:30.581246  804231 network_create.go:108] docker network ha-472903 192.168.49.0/24 created
	I0916 23:56:30.581278  804231 kic.go:121] calculated static IP "192.168.49.2" for the "ha-472903" container
	I0916 23:56:30.581331  804231 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 23:56:30.597113  804231 cli_runner.go:164] Run: docker volume create ha-472903 --label name.minikube.sigs.k8s.io=ha-472903 --label created_by.minikube.sigs.k8s.io=true
	I0916 23:56:30.614615  804231 oci.go:103] Successfully created a docker volume ha-472903
	I0916 23:56:30.614694  804231 cli_runner.go:164] Run: docker run --rm --name ha-472903-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-472903 --entrypoint /usr/bin/test -v ha-472903:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0916 23:56:30.983301  804231 oci.go:107] Successfully prepared a docker volume ha-472903
	I0916 23:56:30.983346  804231 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0916 23:56:30.983369  804231 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 23:56:30.983457  804231 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-472903:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 23:56:35.109877  804231 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-472903:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.126378793s)
	I0916 23:56:35.109930  804231 kic.go:203] duration metric: took 4.126557088s to extract preloaded images to volume ...
	W0916 23:56:35.110010  804231 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0916 23:56:35.110041  804231 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0916 23:56:35.110081  804231 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 23:56:35.162423  804231 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-472903 --name ha-472903 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-472903 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-472903 --network ha-472903 --ip 192.168.49.2 --volume ha-472903:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0916 23:56:35.411448  804231 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Running}}
	I0916 23:56:35.428877  804231 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0916 23:56:35.447492  804231 cli_runner.go:164] Run: docker exec ha-472903 stat /var/lib/dpkg/alternatives/iptables
	I0916 23:56:35.490145  804231 oci.go:144] the created container "ha-472903" has a running status.
	I0916 23:56:35.490177  804231 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa...
	I0916 23:56:35.748917  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0916 23:56:35.748974  804231 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 23:56:35.776040  804231 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0916 23:56:35.795374  804231 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 23:56:35.795403  804231 kic_runner.go:114] Args: [docker exec --privileged ha-472903 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 23:56:35.841194  804231 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0916 23:56:35.859165  804231 machine.go:93] provisionDockerMachine start ...
	I0916 23:56:35.859278  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:56:35.877348  804231 main.go:141] libmachine: Using SSH client type: native
	I0916 23:56:35.877637  804231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33544 <nil> <nil>}
	I0916 23:56:35.877654  804231 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 23:56:36.014327  804231 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-472903
	
	I0916 23:56:36.014356  804231 ubuntu.go:182] provisioning hostname "ha-472903"
	I0916 23:56:36.014430  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:56:36.033295  804231 main.go:141] libmachine: Using SSH client type: native
	I0916 23:56:36.033543  804231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33544 <nil> <nil>}
	I0916 23:56:36.033558  804231 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-472903 && echo "ha-472903" | sudo tee /etc/hostname
	I0916 23:56:36.178557  804231 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-472903
	
	I0916 23:56:36.178627  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:56:36.196584  804231 main.go:141] libmachine: Using SSH client type: native
	I0916 23:56:36.196791  804231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33544 <nil> <nil>}
	I0916 23:56:36.196814  804231 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-472903' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-472903/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-472903' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 23:56:36.331895  804231 main.go:141] libmachine: SSH cmd err, output: <nil>: 
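A quick way to confirm the hostname provisioning above took effect is to check the node directly; a minimal sketch, assuming the ha-472903 container is still running under the local Docker daemon (illustrative, not part of the recorded run):

	# check the container hostname and its /etc/hosts entry set by the script above
	docker exec ha-472903 sh -c 'hostname; grep "ha-472903" /etc/hosts'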
	I0916 23:56:36.331954  804231 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-749120/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-749120/.minikube}
	I0916 23:56:36.331987  804231 ubuntu.go:190] setting up certificates
	I0916 23:56:36.332000  804231 provision.go:84] configureAuth start
	I0916 23:56:36.332062  804231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903
	I0916 23:56:36.350923  804231 provision.go:143] copyHostCerts
	I0916 23:56:36.350968  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem
	I0916 23:56:36.351011  804231 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem, removing ...
	I0916 23:56:36.351021  804231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem
	I0916 23:56:36.351100  804231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem (1078 bytes)
	I0916 23:56:36.351216  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem
	I0916 23:56:36.351254  804231 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem, removing ...
	I0916 23:56:36.351265  804231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem
	I0916 23:56:36.351307  804231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem (1123 bytes)
	I0916 23:56:36.351374  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem
	I0916 23:56:36.351400  804231 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem, removing ...
	I0916 23:56:36.351409  804231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem
	I0916 23:56:36.351461  804231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem (1675 bytes)
	I0916 23:56:36.351538  804231 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem org=jenkins.ha-472903 san=[127.0.0.1 192.168.49.2 ha-472903 localhost minikube]
	I0916 23:56:36.406870  804231 provision.go:177] copyRemoteCerts
	I0916 23:56:36.406927  804231 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 23:56:36.406977  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:56:36.424064  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0916 23:56:36.520663  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 23:56:36.520737  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 23:56:36.546100  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 23:56:36.546162  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0916 23:56:36.569886  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 23:56:36.569946  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 23:56:36.593694  804231 provision.go:87] duration metric: took 261.676108ms to configureAuth
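configureAuth above generates and copies a Docker server certificate whose SANs were listed a few lines earlier (127.0.0.1, 192.168.49.2, ha-472903, localhost, minikube); a minimal sketch of inspecting it on the node, assuming openssl is available inside the kicbase image (illustrative):

	# print the SANs of the server cert copied to /etc/docker/server.pem above
	docker exec ha-472903 sh -c 'openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 "Subject Alternative Name"'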
	I0916 23:56:36.593725  804231 ubuntu.go:206] setting minikube options for container-runtime
	I0916 23:56:36.593891  804231 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0916 23:56:36.593903  804231 machine.go:96] duration metric: took 734.71199ms to provisionDockerMachine
	I0916 23:56:36.593911  804231 client.go:171] duration metric: took 6.115914604s to LocalClient.Create
	I0916 23:56:36.593933  804231 start.go:167] duration metric: took 6.115991162s to libmachine.API.Create "ha-472903"
	I0916 23:56:36.593942  804231 start.go:293] postStartSetup for "ha-472903" (driver="docker")
	I0916 23:56:36.593950  804231 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 23:56:36.593994  804231 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 23:56:36.594038  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:56:36.611127  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0916 23:56:36.708294  804231 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 23:56:36.711629  804231 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 23:56:36.711662  804231 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 23:56:36.711669  804231 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 23:56:36.711677  804231 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0916 23:56:36.711690  804231 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-749120/.minikube/addons for local assets ...
	I0916 23:56:36.711734  804231 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-749120/.minikube/files for local assets ...
	I0916 23:56:36.711817  804231 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> 7527072.pem in /etc/ssl/certs
	I0916 23:56:36.711829  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> /etc/ssl/certs/7527072.pem
	I0916 23:56:36.711917  804231 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 23:56:36.720521  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem --> /etc/ssl/certs/7527072.pem (1708 bytes)
	I0916 23:56:36.746614  804231 start.go:296] duration metric: took 152.657806ms for postStartSetup
	I0916 23:56:36.746970  804231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903
	I0916 23:56:36.763912  804231 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0916 23:56:36.764159  804231 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 23:56:36.764204  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:56:36.781099  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0916 23:56:36.872372  804231 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 23:56:36.876670  804231 start.go:128] duration metric: took 6.400768235s to createHost
	I0916 23:56:36.876701  804231 start.go:83] releasing machines lock for "ha-472903", held for 6.400928988s
	I0916 23:56:36.876787  804231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903
	I0916 23:56:36.894080  804231 ssh_runner.go:195] Run: cat /version.json
	I0916 23:56:36.894094  804231 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 23:56:36.894141  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:56:36.894182  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:56:36.912628  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0916 23:56:36.913001  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0916 23:56:37.079386  804231 ssh_runner.go:195] Run: systemctl --version
	I0916 23:56:37.084104  804231 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 23:56:37.088563  804231 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 23:56:37.116786  804231 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 23:56:37.116846  804231 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 23:56:37.142716  804231 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
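The two find commands above first patch the loopback CNI config in place and then rename the bridge/podman configs out of the way; a minimal sketch of checking the result, assuming the same container and paths (illustrative):

	# list active vs. masked CNI configs; the bridge configs should carry a .mk_disabled suffix
	docker exec ha-472903 ls -la /etc/cni/net.d
	# the patched *loopback.conf* should now contain "name": "loopback" and "cniVersion": "1.0.0"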
	I0916 23:56:37.142738  804231 start.go:495] detecting cgroup driver to use...
	I0916 23:56:37.142772  804231 detect.go:190] detected "systemd" cgroup driver on host os
	I0916 23:56:37.142832  804231 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 23:56:37.154693  804231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 23:56:37.165920  804231 docker.go:218] disabling cri-docker service (if available) ...
	I0916 23:56:37.165978  804231 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 23:56:37.179227  804231 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 23:56:37.192751  804231 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 23:56:37.255915  804231 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 23:56:37.324761  804231 docker.go:234] disabling docker service ...
	I0916 23:56:37.324836  804231 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 23:56:37.342233  804231 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 23:56:37.353324  804231 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 23:56:37.420555  804231 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 23:56:37.486396  804231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 23:56:37.497453  804231 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 23:56:37.513435  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0916 23:56:37.524399  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 23:56:37.534072  804231 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0916 23:56:37.534132  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0916 23:56:37.543872  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 23:56:37.553478  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 23:56:37.562918  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 23:56:37.572431  804231 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 23:56:37.581176  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 23:56:37.590540  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 23:56:37.599825  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 23:56:37.609340  804231 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 23:56:37.617500  804231 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 23:56:37.625771  804231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:56:37.685687  804231 ssh_runner.go:195] Run: sudo systemctl restart containerd
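The sed edits above rewrite /etc/containerd/config.toml before containerd is restarted; a minimal sketch of confirming the key settings afterwards (illustrative, same container assumed):

	# grep for the values rewritten by the sed commands above
	docker exec ha-472903 sh -c 'grep -nE "SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports" /etc/containerd/config.toml'
	# expected: SystemdCgroup = true, sandbox_image = "registry.k8s.io/pause:3.10.1",
	#           conf_dir = "/etc/cni/net.d", enable_unprivileged_ports = true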
	I0916 23:56:37.787201  804231 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0916 23:56:37.787275  804231 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0916 23:56:37.791126  804231 start.go:563] Will wait 60s for crictl version
	I0916 23:56:37.791200  804231 ssh_runner.go:195] Run: which crictl
	I0916 23:56:37.794684  804231 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 23:56:37.828753  804231 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0916 23:56:37.828806  804231 ssh_runner.go:195] Run: containerd --version
	I0916 23:56:37.851610  804231 ssh_runner.go:195] Run: containerd --version
	I0916 23:56:37.876577  804231 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0916 23:56:37.877711  804231 cli_runner.go:164] Run: docker network inspect ha-472903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:56:37.894044  804231 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 23:56:37.897995  804231 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 23:56:37.909702  804231 kubeadm.go:875] updating cluster {Name:ha-472903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 23:56:37.909830  804231 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0916 23:56:37.909936  804231 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 23:56:37.943964  804231 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 23:56:37.943985  804231 containerd.go:534] Images already preloaded, skipping extraction
	I0916 23:56:37.944040  804231 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 23:56:37.976374  804231 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 23:56:37.976397  804231 cache_images.go:85] Images are preloaded, skipping loading
	I0916 23:56:37.976405  804231 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 containerd true true} ...
	I0916 23:56:37.976525  804231 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-472903 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 23:56:37.976590  804231 ssh_runner.go:195] Run: sudo crictl info
	I0916 23:56:38.009585  804231 cni.go:84] Creating CNI manager for ""
	I0916 23:56:38.009608  804231 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0916 23:56:38.009620  804231 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 23:56:38.009642  804231 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-472903 NodeName:ha-472903 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 23:56:38.009740  804231 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "ha-472903"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
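The rendered config above is written to /var/tmp/minikube/kubeadm.yaml.new further down and, after being copied to kubeadm.yaml, drives the kubeadm init call at the end of this section; a minimal sketch of exercising the same file without mutating the node, assuming preflight may still need the same --ignore-preflight-errors list as the real invocation below (illustrative):

	# dry-run the rendered kubeadm config inside the node
	docker exec ha-472903 sh -c 'sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run'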
	
	I0916 23:56:38.009763  804231 kube-vip.go:115] generating kube-vip config ...
	I0916 23:56:38.009799  804231 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0916 23:56:38.022796  804231 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0916 23:56:38.022978  804231 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
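Since the lsmod check above found no ipvs modules, kube-vip here provides only the ARP-announced VIP 192.168.49.254 on the vip_interface eth0, without control-plane load-balancing; a minimal sketch of observing that once the static pod is running, assuming kube-vip's usual ARP-mode behaviour of adding the VIP as a secondary address (illustrative):

	# the VIP should show up as an extra address on eth0
	docker exec ha-472903 ip addr show eth0 | grep 192.168.49.254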
	I0916 23:56:38.023041  804231 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0916 23:56:38.032162  804231 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 23:56:38.032241  804231 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0916 23:56:38.040936  804231 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0916 23:56:38.058672  804231 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 23:56:38.079097  804231 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I0916 23:56:38.097183  804231 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I0916 23:56:38.116629  804231 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0916 23:56:38.120221  804231 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 23:56:38.131205  804231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:56:38.195735  804231 ssh_runner.go:195] Run: sudo systemctl start kubelet
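The scp calls above lay down the kubelet drop-in, the kubelet unit, the kubeadm config and the kube-vip static-pod manifest before the kubelet is started; a minimal sketch of verifying that the drop-in was picked up (illustrative, same container assumed):

	# show the kubelet unit together with the 10-kubeadm.conf drop-in written above
	docker exec ha-472903 systemctl cat kubelet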
	I0916 23:56:38.216649  804231 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903 for IP: 192.168.49.2
	I0916 23:56:38.216671  804231 certs.go:194] generating shared ca certs ...
	I0916 23:56:38.216692  804231 certs.go:226] acquiring lock for ca certs: {Name:mk87d179b4a631193bd9c86db8034ccf19400cde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:56:38.216854  804231 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key
	I0916 23:56:38.216907  804231 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key
	I0916 23:56:38.216920  804231 certs.go:256] generating profile certs ...
	I0916 23:56:38.216989  804231 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key
	I0916 23:56:38.217007  804231 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.crt with IP's: []
	I0916 23:56:38.286683  804231 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.crt ...
	I0916 23:56:38.286713  804231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.crt: {Name:mk764ef4ac73429cea14d799835f3822d8afb254 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:56:38.286876  804231 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key ...
	I0916 23:56:38.286887  804231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key: {Name:mk988f40b7ad20c61b4ffc19afd15eea50787a6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:56:38.286965  804231 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.ef70afe8
	I0916 23:56:38.286981  804231 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.ef70afe8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I0916 23:56:38.411782  804231 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.ef70afe8 ...
	I0916 23:56:38.411812  804231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.ef70afe8: {Name:mkbca9fcc4cd73eb913b43ef67240975ba048601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:56:38.411977  804231 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.ef70afe8 ...
	I0916 23:56:38.411990  804231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.ef70afe8: {Name:mk56f7fb29011c6372caaf96dfdbcab1b202e8b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:56:38.412061  804231 certs.go:381] copying /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.ef70afe8 -> /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt
	I0916 23:56:38.412138  804231 certs.go:385] copying /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.ef70afe8 -> /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key
	I0916 23:56:38.412190  804231 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key
	I0916 23:56:38.412204  804231 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt with IP's: []
	I0916 23:56:38.735728  804231 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt ...
	I0916 23:56:38.735759  804231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt: {Name:mke25602938652bbe51197bb8e5738dfc5dca50b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:56:38.735935  804231 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key ...
	I0916 23:56:38.735947  804231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key: {Name:mkc7d616357a8be8181d43ca8cb33ab512ce94dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:56:38.736027  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 23:56:38.736044  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 23:56:38.736055  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 23:56:38.736068  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 23:56:38.736078  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 23:56:38.736090  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 23:56:38.736105  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 23:56:38.736115  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 23:56:38.736175  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem (1338 bytes)
	W0916 23:56:38.736210  804231 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707_empty.pem, impossibly tiny 0 bytes
	I0916 23:56:38.736218  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem (1675 bytes)
	I0916 23:56:38.736242  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem (1078 bytes)
	I0916 23:56:38.736266  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem (1123 bytes)
	I0916 23:56:38.736284  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem (1675 bytes)
	I0916 23:56:38.736322  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem (1708 bytes)
	I0916 23:56:38.736347  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> /usr/share/ca-certificates/7527072.pem
	I0916 23:56:38.736360  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:56:38.736372  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem -> /usr/share/ca-certificates/752707.pem
	I0916 23:56:38.736905  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 23:56:38.762142  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 23:56:38.786590  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 23:56:38.810694  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0916 23:56:38.834521  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0916 23:56:38.858677  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 23:56:38.881975  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 23:56:38.906146  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 23:56:38.929698  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem --> /usr/share/ca-certificates/7527072.pem (1708 bytes)
	I0916 23:56:38.955154  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 23:56:38.978551  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem --> /usr/share/ca-certificates/752707.pem (1338 bytes)
	I0916 23:56:39.001782  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 23:56:39.019405  804231 ssh_runner.go:195] Run: openssl version
	I0916 23:56:39.024868  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/752707.pem && ln -fs /usr/share/ca-certificates/752707.pem /etc/ssl/certs/752707.pem"
	I0916 23:56:39.034165  804231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/752707.pem
	I0916 23:56:39.038348  804231 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 23:54 /usr/share/ca-certificates/752707.pem
	I0916 23:56:39.038407  804231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/752707.pem
	I0916 23:56:39.045172  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/752707.pem /etc/ssl/certs/51391683.0"
	I0916 23:56:39.054735  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7527072.pem && ln -fs /usr/share/ca-certificates/7527072.pem /etc/ssl/certs/7527072.pem"
	I0916 23:56:39.065180  804231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7527072.pem
	I0916 23:56:39.068976  804231 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 23:54 /usr/share/ca-certificates/7527072.pem
	I0916 23:56:39.069038  804231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7527072.pem
	I0916 23:56:39.075920  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7527072.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 23:56:39.085838  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 23:56:39.095394  804231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:56:39.098966  804231 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:56:39.099019  804231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:56:39.105643  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
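The 51391683.0, 3ec20f2e.0 and b5213941.0 links above are OpenSSL subject-hash names, which is how the trust store in /etc/ssl/certs is indexed; a minimal sketch of the same convention, run inside the node (illustrative):

	# recreate the hash-named link for the minikube CA, mirroring the commands above
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941 per the log
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"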
	I0916 23:56:39.114800  804231 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 23:56:39.117988  804231 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 23:56:39.118033  804231 kubeadm.go:392] StartCluster: {Name:ha-472903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 23:56:39.118097  804231 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0916 23:56:39.118132  804231 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 23:56:39.154291  804231 cri.go:89] found id: ""
	I0916 23:56:39.154361  804231 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 23:56:39.163485  804231 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 23:56:39.172454  804231 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0916 23:56:39.172499  804231 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 23:56:39.181066  804231 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 23:56:39.181098  804231 kubeadm.go:157] found existing configuration files:
	
	I0916 23:56:39.181131  804231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 23:56:39.189824  804231 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 23:56:39.189873  804231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 23:56:39.198165  804231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 23:56:39.206772  804231 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 23:56:39.206819  804231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 23:56:39.215119  804231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 23:56:39.223660  804231 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 23:56:39.223717  804231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 23:56:39.232099  804231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 23:56:39.240514  804231 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 23:56:39.240559  804231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 23:56:39.248850  804231 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0916 23:56:39.285897  804231 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0916 23:56:39.285950  804231 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 23:56:39.300660  804231 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0916 23:56:39.300727  804231 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1037-gcp
	I0916 23:56:39.300801  804231 kubeadm.go:310] OS: Linux
	I0916 23:56:39.300901  804231 kubeadm.go:310] CGROUPS_CPU: enabled
	I0916 23:56:39.300975  804231 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0916 23:56:39.301037  804231 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0916 23:56:39.301080  804231 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0916 23:56:39.301127  804231 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0916 23:56:39.301169  804231 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0916 23:56:39.301211  804231 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0916 23:56:39.301268  804231 kubeadm.go:310] CGROUPS_IO: enabled
	I0916 23:56:39.351787  804231 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 23:56:39.351909  804231 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 23:56:39.351995  804231 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 23:56:39.358062  804231 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 23:56:39.360794  804231 out.go:252]   - Generating certificates and keys ...
	I0916 23:56:39.360906  804231 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 23:56:39.360984  804231 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 23:56:39.805287  804231 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 23:56:40.002708  804231 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 23:56:40.279763  804231 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 23:56:40.813028  804231 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 23:56:41.074848  804231 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 23:56:41.075343  804231 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-472903 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0916 23:56:41.124880  804231 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 23:56:41.125041  804231 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-472903 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0916 23:56:41.707716  804231 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 23:56:42.089212  804231 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 23:56:42.627038  804231 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 23:56:42.627119  804231 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 23:56:42.823901  804231 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 23:56:43.022989  804231 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 23:56:43.163778  804231 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 23:56:43.708743  804231 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 23:56:44.024642  804231 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 23:56:44.025130  804231 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 23:56:44.027319  804231 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 23:56:44.029599  804231 out.go:252]   - Booting up control plane ...
	I0916 23:56:44.029737  804231 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 23:56:44.029842  804231 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 23:56:44.030181  804231 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 23:56:44.039957  804231 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 23:56:44.040118  804231 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0916 23:56:44.047794  804231 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0916 23:56:44.048177  804231 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 23:56:44.048269  804231 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 23:56:44.122629  804231 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 23:56:44.122739  804231 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 23:56:45.124352  804231 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001822735s
	I0916 23:56:45.127338  804231 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0916 23:56:45.127477  804231 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0916 23:56:45.127582  804231 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0916 23:56:45.127694  804231 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0916 23:56:47.478256  804231 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.350892202s
	I0916 23:56:47.717698  804231 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.590223043s
	I0916 23:56:49.129161  804231 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 4.001748341s
	I0916 23:56:49.140036  804231 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 23:56:49.148779  804231 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 23:56:49.158010  804231 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 23:56:49.158279  804231 kubeadm.go:310] [mark-control-plane] Marking the node ha-472903 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 23:56:49.165085  804231 kubeadm.go:310] [bootstrap-token] Using token: 4apri1.yqe8ok7wc4ltba21
	I0916 23:56:49.166180  804231 out.go:252]   - Configuring RBAC rules ...
	I0916 23:56:49.166328  804231 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 23:56:49.169225  804231 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 23:56:49.174527  804231 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 23:56:49.176741  804231 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 23:56:49.178892  804231 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 23:56:49.181107  804231 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 23:56:49.534440  804231 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 23:56:49.948567  804231 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 23:56:50.534581  804231 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 23:56:50.535429  804231 kubeadm.go:310] 
	I0916 23:56:50.535529  804231 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 23:56:50.535542  804231 kubeadm.go:310] 
	I0916 23:56:50.535650  804231 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 23:56:50.535660  804231 kubeadm.go:310] 
	I0916 23:56:50.535696  804231 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 23:56:50.535801  804231 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 23:56:50.535858  804231 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 23:56:50.535872  804231 kubeadm.go:310] 
	I0916 23:56:50.535940  804231 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 23:56:50.535949  804231 kubeadm.go:310] 
	I0916 23:56:50.536027  804231 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 23:56:50.536037  804231 kubeadm.go:310] 
	I0916 23:56:50.536125  804231 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 23:56:50.536212  804231 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 23:56:50.536280  804231 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 23:56:50.536286  804231 kubeadm.go:310] 
	I0916 23:56:50.536356  804231 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 23:56:50.536441  804231 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 23:56:50.536448  804231 kubeadm.go:310] 
	I0916 23:56:50.536543  804231 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 4apri1.yqe8ok7wc4ltba21 \
	I0916 23:56:50.536688  804231 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:52c78ec9ad9a2dc0941e43ce337b864c76ea573e452bc75ed737e69ad76deac1 \
	I0916 23:56:50.536722  804231 kubeadm.go:310] 	--control-plane 
	I0916 23:56:50.536731  804231 kubeadm.go:310] 
	I0916 23:56:50.536842  804231 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 23:56:50.536857  804231 kubeadm.go:310] 
	I0916 23:56:50.536947  804231 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 4apri1.yqe8ok7wc4ltba21 \
	I0916 23:56:50.537084  804231 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:52c78ec9ad9a2dc0941e43ce337b864c76ea573e452bc75ed737e69ad76deac1 
	I0916 23:56:50.539097  804231 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1037-gcp\n", err: exit status 1
	I0916 23:56:50.539238  804231 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 23:56:50.539264  804231 cni.go:84] Creating CNI manager for ""
	I0916 23:56:50.539274  804231 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0916 23:56:50.540523  804231 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0916 23:56:50.541480  804231 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 23:56:50.545518  804231 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0916 23:56:50.545534  804231 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 23:56:50.563251  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0916 23:56:50.762002  804231 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 23:56:50.762092  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:56:50.762127  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-472903 minikube.k8s.io/updated_at=2025_09_16T23_56_50_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a minikube.k8s.io/name=ha-472903 minikube.k8s.io/primary=true
	I0916 23:56:50.771679  804231 ops.go:34] apiserver oom_adj: -16
	I0916 23:56:50.843646  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:56:51.344428  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:56:51.844440  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:56:52.344316  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:56:52.844594  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:56:53.343854  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:56:53.844615  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:56:54.344057  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:56:54.844066  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:56:55.344374  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:56:55.844478  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:56:55.927027  804231 kubeadm.go:1105] duration metric: took 5.165002596s to wait for elevateKubeSystemPrivileges
	I0916 23:56:55.927062  804231 kubeadm.go:394] duration metric: took 16.809033965s to StartCluster
	I0916 23:56:55.927081  804231 settings.go:142] acquiring lock: {Name:mk6c1a5bee23e141aad5180323c16c47ed580ab8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:56:55.927146  804231 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21550-749120/kubeconfig
	I0916 23:56:55.927785  804231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/kubeconfig: {Name:mk937123a8fee18625833b0bd778c4556f6787be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:56:55.928026  804231 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 23:56:55.928018  804231 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 23:56:55.928038  804231 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 23:56:55.928103  804231 start.go:241] waiting for startup goroutines ...
	I0916 23:56:55.928121  804231 addons.go:69] Setting default-storageclass=true in profile "ha-472903"
	I0916 23:56:55.928148  804231 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-472903"
	I0916 23:56:55.928126  804231 addons.go:69] Setting storage-provisioner=true in profile "ha-472903"
	I0916 23:56:55.928222  804231 addons.go:238] Setting addon storage-provisioner=true in "ha-472903"
	I0916 23:56:55.928269  804231 host.go:66] Checking if "ha-472903" exists ...
	I0916 23:56:55.928296  804231 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0916 23:56:55.928610  804231 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0916 23:56:55.928740  804231 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0916 23:56:55.954806  804231 kapi.go:59] client config for ha-472903: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key", CAFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 23:56:55.955519  804231 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0916 23:56:55.955545  804231 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0916 23:56:55.955543  804231 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0916 23:56:55.955553  804231 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0916 23:56:55.955611  804231 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0916 23:56:55.955620  804231 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0916 23:56:55.956096  804231 addons.go:238] Setting addon default-storageclass=true in "ha-472903"
	I0916 23:56:55.956145  804231 host.go:66] Checking if "ha-472903" exists ...
	I0916 23:56:55.956685  804231 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0916 23:56:55.957279  804231 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 23:56:55.961536  804231 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 23:56:55.961557  804231 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 23:56:55.961614  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:56:55.979896  804231 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 23:56:55.979925  804231 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 23:56:55.979985  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:56:55.982838  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0916 23:56:55.999402  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0916 23:56:56.011618  804231 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 23:56:56.095355  804231 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 23:56:56.110814  804231 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 23:56:56.153646  804231 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0916 23:56:56.360175  804231 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0916 23:56:56.361116  804231 addons.go:514] duration metric: took 433.076562ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0916 23:56:56.361149  804231 start.go:246] waiting for cluster config update ...
	I0916 23:56:56.361163  804231 start.go:255] writing updated cluster config ...
	I0916 23:56:56.362407  804231 out.go:203] 
	I0916 23:56:56.363527  804231 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0916 23:56:56.363621  804231 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0916 23:56:56.364993  804231 out.go:179] * Starting "ha-472903-m02" control-plane node in "ha-472903" cluster
	I0916 23:56:56.365873  804231 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0916 23:56:56.366751  804231 out.go:179] * Pulling base image v0.0.48 ...
	I0916 23:56:56.367539  804231 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0916 23:56:56.367556  804231 cache.go:58] Caching tarball of preloaded images
	I0916 23:56:56.367630  804231 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0916 23:56:56.367646  804231 preload.go:172] Found /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 23:56:56.367654  804231 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0916 23:56:56.367711  804231 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0916 23:56:56.386547  804231 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0916 23:56:56.386565  804231 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0916 23:56:56.386580  804231 cache.go:232] Successfully downloaded all kic artifacts
	I0916 23:56:56.386607  804231 start.go:360] acquireMachinesLock for ha-472903-m02: {Name:mk81d8c73856cf84ceff1767a1681f3f3cdab773 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 23:56:56.386700  804231 start.go:364] duration metric: took 70.184µs to acquireMachinesLock for "ha-472903-m02"
	I0916 23:56:56.386738  804231 start.go:93] Provisioning new machine with config: &{Name:ha-472903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 23:56:56.386824  804231 start.go:125] createHost starting for "m02" (driver="docker")
	I0916 23:56:56.388402  804231 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0916 23:56:56.388536  804231 start.go:159] libmachine.API.Create for "ha-472903" (driver="docker")
	I0916 23:56:56.388563  804231 client.go:168] LocalClient.Create starting
	I0916 23:56:56.388626  804231 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem
	I0916 23:56:56.388664  804231 main.go:141] libmachine: Decoding PEM data...
	I0916 23:56:56.388687  804231 main.go:141] libmachine: Parsing certificate...
	I0916 23:56:56.388757  804231 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem
	I0916 23:56:56.388789  804231 main.go:141] libmachine: Decoding PEM data...
	I0916 23:56:56.388804  804231 main.go:141] libmachine: Parsing certificate...
	I0916 23:56:56.389042  804231 cli_runner.go:164] Run: docker network inspect ha-472903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:56:56.404624  804231 network_create.go:77] Found existing network {name:ha-472903 subnet:0xc001d2d140 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0916 23:56:56.404653  804231 kic.go:121] calculated static IP "192.168.49.3" for the "ha-472903-m02" container
	I0916 23:56:56.404719  804231 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 23:56:56.420231  804231 cli_runner.go:164] Run: docker volume create ha-472903-m02 --label name.minikube.sigs.k8s.io=ha-472903-m02 --label created_by.minikube.sigs.k8s.io=true
	I0916 23:56:56.436361  804231 oci.go:103] Successfully created a docker volume ha-472903-m02
	I0916 23:56:56.436430  804231 cli_runner.go:164] Run: docker run --rm --name ha-472903-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-472903-m02 --entrypoint /usr/bin/test -v ha-472903-m02:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0916 23:56:56.943375  804231 oci.go:107] Successfully prepared a docker volume ha-472903-m02
	I0916 23:56:56.943427  804231 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0916 23:56:56.943455  804231 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 23:56:56.943528  804231 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-472903-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 23:57:01.091161  804231 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-472903-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.147592491s)
	I0916 23:57:01.091197  804231 kic.go:203] duration metric: took 4.147738136s to extract preloaded images to volume ...
	W0916 23:57:01.091312  804231 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0916 23:57:01.091355  804231 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0916 23:57:01.091403  804231 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 23:57:01.142900  804231 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-472903-m02 --name ha-472903-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-472903-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-472903-m02 --network ha-472903 --ip 192.168.49.3 --volume ha-472903-m02:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0916 23:57:01.378924  804231 cli_runner.go:164] Run: docker container inspect ha-472903-m02 --format={{.State.Running}}
	I0916 23:57:01.396232  804231 cli_runner.go:164] Run: docker container inspect ha-472903-m02 --format={{.State.Status}}
	I0916 23:57:01.412927  804231 cli_runner.go:164] Run: docker exec ha-472903-m02 stat /var/lib/dpkg/alternatives/iptables
	I0916 23:57:01.469205  804231 oci.go:144] the created container "ha-472903-m02" has a running status.
	I0916 23:57:01.469235  804231 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa...
	I0916 23:57:01.517570  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0916 23:57:01.517621  804231 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 23:57:01.540818  804231 cli_runner.go:164] Run: docker container inspect ha-472903-m02 --format={{.State.Status}}
	I0916 23:57:01.560831  804231 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 23:57:01.560858  804231 kic_runner.go:114] Args: [docker exec --privileged ha-472903-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 23:57:01.615037  804231 cli_runner.go:164] Run: docker container inspect ha-472903-m02 --format={{.State.Status}}
	I0916 23:57:01.637921  804231 machine.go:93] provisionDockerMachine start ...
	I0916 23:57:01.638030  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0916 23:57:01.659741  804231 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:01.660056  804231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33549 <nil> <nil>}
	I0916 23:57:01.660078  804231 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 23:57:01.800716  804231 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-472903-m02
	
	I0916 23:57:01.800749  804231 ubuntu.go:182] provisioning hostname "ha-472903-m02"
	I0916 23:57:01.800817  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0916 23:57:01.819791  804231 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:01.820013  804231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33549 <nil> <nil>}
	I0916 23:57:01.820030  804231 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-472903-m02 && echo "ha-472903-m02" | sudo tee /etc/hostname
	I0916 23:57:01.967539  804231 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-472903-m02
	
	I0916 23:57:01.967631  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0916 23:57:01.987814  804231 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:01.988031  804231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33549 <nil> <nil>}
	I0916 23:57:01.988047  804231 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-472903-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-472903-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-472903-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 23:57:02.121536  804231 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 23:57:02.121571  804231 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-749120/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-749120/.minikube}
	I0916 23:57:02.121588  804231 ubuntu.go:190] setting up certificates
	I0916 23:57:02.121602  804231 provision.go:84] configureAuth start
	I0916 23:57:02.121663  804231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m02
	I0916 23:57:02.139056  804231 provision.go:143] copyHostCerts
	I0916 23:57:02.139098  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem
	I0916 23:57:02.139135  804231 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem, removing ...
	I0916 23:57:02.139147  804231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem
	I0916 23:57:02.139221  804231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem (1078 bytes)
	I0916 23:57:02.139329  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem
	I0916 23:57:02.139362  804231 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem, removing ...
	I0916 23:57:02.139372  804231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem
	I0916 23:57:02.139430  804231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem (1123 bytes)
	I0916 23:57:02.139521  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem
	I0916 23:57:02.139549  804231 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem, removing ...
	I0916 23:57:02.139559  804231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem
	I0916 23:57:02.139599  804231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem (1675 bytes)
	I0916 23:57:02.139690  804231 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem org=jenkins.ha-472903-m02 san=[127.0.0.1 192.168.49.3 ha-472903-m02 localhost minikube]
	I0916 23:57:02.262354  804231 provision.go:177] copyRemoteCerts
	I0916 23:57:02.262428  804231 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 23:57:02.262491  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0916 23:57:02.279792  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33549 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa Username:docker}
	I0916 23:57:02.375833  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 23:57:02.375903  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 23:57:02.400316  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 23:57:02.400373  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 23:57:02.422506  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 23:57:02.422550  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 23:57:02.445091  804231 provision.go:87] duration metric: took 323.464176ms to configureAuth
	I0916 23:57:02.445121  804231 ubuntu.go:206] setting minikube options for container-runtime
	I0916 23:57:02.445295  804231 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0916 23:57:02.445313  804231 machine.go:96] duration metric: took 807.372883ms to provisionDockerMachine
	I0916 23:57:02.445320  804231 client.go:171] duration metric: took 6.056751196s to LocalClient.Create
	I0916 23:57:02.445337  804231 start.go:167] duration metric: took 6.056804276s to libmachine.API.Create "ha-472903"
	I0916 23:57:02.445346  804231 start.go:293] postStartSetup for "ha-472903-m02" (driver="docker")
	I0916 23:57:02.445354  804231 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 23:57:02.445402  804231 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 23:57:02.445461  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0916 23:57:02.463550  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33549 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa Username:docker}
	I0916 23:57:02.559528  804231 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 23:57:02.562755  804231 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 23:57:02.562780  804231 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 23:57:02.562787  804231 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 23:57:02.562793  804231 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0916 23:57:02.562803  804231 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-749120/.minikube/addons for local assets ...
	I0916 23:57:02.562847  804231 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-749120/.minikube/files for local assets ...
	I0916 23:57:02.562920  804231 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> 7527072.pem in /etc/ssl/certs
	I0916 23:57:02.562930  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> /etc/ssl/certs/7527072.pem
	I0916 23:57:02.563018  804231 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 23:57:02.571142  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem --> /etc/ssl/certs/7527072.pem (1708 bytes)
	I0916 23:57:02.596466  804231 start.go:296] duration metric: took 151.106324ms for postStartSetup
	I0916 23:57:02.596768  804231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m02
	I0916 23:57:02.613316  804231 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0916 23:57:02.613561  804231 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 23:57:02.613601  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0916 23:57:02.632056  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33549 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa Username:docker}
	I0916 23:57:02.723085  804231 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 23:57:02.727430  804231 start.go:128] duration metric: took 6.340577447s to createHost
	I0916 23:57:02.727453  804231 start.go:83] releasing machines lock for "ha-472903-m02", held for 6.34073897s
	I0916 23:57:02.727519  804231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m02
	I0916 23:57:02.746152  804231 out.go:179] * Found network options:
	I0916 23:57:02.747248  804231 out.go:179]   - NO_PROXY=192.168.49.2
	W0916 23:57:02.748187  804231 proxy.go:120] fail to check proxy env: Error ip not in block
	W0916 23:57:02.748240  804231 proxy.go:120] fail to check proxy env: Error ip not in block
	I0916 23:57:02.748311  804231 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 23:57:02.748360  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0916 23:57:02.748367  804231 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 23:57:02.748427  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0916 23:57:02.765286  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33549 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa Username:docker}
	I0916 23:57:02.766625  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33549 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa Username:docker}
	I0916 23:57:02.856922  804231 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 23:57:02.936692  804231 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 23:57:02.936761  804231 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 23:57:02.961822  804231 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0916 23:57:02.961845  804231 start.go:495] detecting cgroup driver to use...
	I0916 23:57:02.961878  804231 detect.go:190] detected "systemd" cgroup driver on host os
	I0916 23:57:02.961919  804231 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 23:57:02.973318  804231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 23:57:02.983927  804231 docker.go:218] disabling cri-docker service (if available) ...
	I0916 23:57:02.983969  804231 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 23:57:02.996091  804231 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 23:57:03.009314  804231 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 23:57:03.072565  804231 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 23:57:03.140469  804231 docker.go:234] disabling docker service ...
	I0916 23:57:03.140526  804231 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 23:57:03.157179  804231 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 23:57:03.167955  804231 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 23:57:03.233386  804231 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 23:57:03.296537  804231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 23:57:03.307574  804231 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 23:57:03.323754  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0916 23:57:03.334305  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 23:57:03.343767  804231 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0916 23:57:03.343826  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0916 23:57:03.353029  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 23:57:03.361991  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 23:57:03.371206  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 23:57:03.380598  804231 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 23:57:03.389216  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 23:57:03.398125  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 23:57:03.407145  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 23:57:03.416183  804231 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 23:57:03.424123  804231 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 23:57:03.432185  804231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:03.493561  804231 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0916 23:57:03.591942  804231 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0916 23:57:03.592010  804231 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0916 23:57:03.595710  804231 start.go:563] Will wait 60s for crictl version
	I0916 23:57:03.595768  804231 ssh_runner.go:195] Run: which crictl
	I0916 23:57:03.599108  804231 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 23:57:03.633181  804231 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0916 23:57:03.633231  804231 ssh_runner.go:195] Run: containerd --version
	I0916 23:57:03.656364  804231 ssh_runner.go:195] Run: containerd --version
	I0916 23:57:03.680150  804231 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0916 23:57:03.681177  804231 out.go:179]   - env NO_PROXY=192.168.49.2
	I0916 23:57:03.682053  804231 cli_runner.go:164] Run: docker network inspect ha-472903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:57:03.699720  804231 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 23:57:03.703306  804231 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 23:57:03.714275  804231 mustload.go:65] Loading cluster: ha-472903
	I0916 23:57:03.714452  804231 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0916 23:57:03.714650  804231 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0916 23:57:03.730631  804231 host.go:66] Checking if "ha-472903" exists ...
	I0916 23:57:03.730849  804231 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903 for IP: 192.168.49.3
	I0916 23:57:03.730859  804231 certs.go:194] generating shared ca certs ...
	I0916 23:57:03.730877  804231 certs.go:226] acquiring lock for ca certs: {Name:mk87d179b4a631193bd9c86db8034ccf19400cde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:03.730987  804231 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key
	I0916 23:57:03.731023  804231 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key
	I0916 23:57:03.731032  804231 certs.go:256] generating profile certs ...
	I0916 23:57:03.731092  804231 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key
	I0916 23:57:03.731114  804231 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.d67fba4a
	I0916 23:57:03.731125  804231 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.d67fba4a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I0916 23:57:03.830248  804231 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.d67fba4a ...
	I0916 23:57:03.830275  804231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.d67fba4a: {Name:mk3e97859392ca0d50685e4c31c19acd3c590753 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:03.830438  804231 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.d67fba4a ...
	I0916 23:57:03.830453  804231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.d67fba4a: {Name:mkd3ec6288ef831df369d4ec39839c410f5116ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:03.830530  804231 certs.go:381] copying /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.d67fba4a -> /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt
	I0916 23:57:03.830653  804231 certs.go:385] copying /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.d67fba4a -> /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key
	I0916 23:57:03.830779  804231 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key
	I0916 23:57:03.830794  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 23:57:03.830809  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 23:57:03.830823  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 23:57:03.830836  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 23:57:03.830846  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 23:57:03.830855  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 23:57:03.830864  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 23:57:03.830873  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 23:57:03.830920  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem (1338 bytes)
	W0916 23:57:03.830952  804231 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707_empty.pem, impossibly tiny 0 bytes
	I0916 23:57:03.830962  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem (1675 bytes)
	I0916 23:57:03.830981  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem (1078 bytes)
	I0916 23:57:03.831001  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem (1123 bytes)
	I0916 23:57:03.831021  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem (1675 bytes)
	I0916 23:57:03.831058  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem (1708 bytes)
	I0916 23:57:03.831081  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem -> /usr/share/ca-certificates/752707.pem
	I0916 23:57:03.831094  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> /usr/share/ca-certificates/7527072.pem
	I0916 23:57:03.831107  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:03.831156  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:57:03.847964  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0916 23:57:03.934599  804231 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0916 23:57:03.938331  804231 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0916 23:57:03.950286  804231 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0916 23:57:03.953541  804231 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0916 23:57:03.965169  804231 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0916 23:57:03.968351  804231 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0916 23:57:03.979814  804231 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0916 23:57:03.982969  804231 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0916 23:57:03.993972  804231 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0916 23:57:03.997171  804231 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0916 23:57:04.008607  804231 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0916 23:57:04.011687  804231 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0916 23:57:04.023019  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 23:57:04.046509  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 23:57:04.069781  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 23:57:04.092702  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0916 23:57:04.114933  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0916 23:57:04.137173  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1671 bytes)
	I0916 23:57:04.159280  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 23:57:04.181367  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 23:57:04.203980  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem --> /usr/share/ca-certificates/752707.pem (1338 bytes)
	I0916 23:57:04.230248  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem --> /usr/share/ca-certificates/7527072.pem (1708 bytes)
	I0916 23:57:04.253628  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 23:57:04.276223  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0916 23:57:04.293552  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0916 23:57:04.309978  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0916 23:57:04.326237  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0916 23:57:04.342704  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0916 23:57:04.359099  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0916 23:57:04.375242  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0916 23:57:04.391611  804231 ssh_runner.go:195] Run: openssl version
	I0916 23:57:04.396637  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/752707.pem && ln -fs /usr/share/ca-certificates/752707.pem /etc/ssl/certs/752707.pem"
	I0916 23:57:04.405389  804231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/752707.pem
	I0916 23:57:04.408604  804231 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 23:54 /usr/share/ca-certificates/752707.pem
	I0916 23:57:04.408651  804231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/752707.pem
	I0916 23:57:04.414862  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/752707.pem /etc/ssl/certs/51391683.0"
	I0916 23:57:04.423583  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7527072.pem && ln -fs /usr/share/ca-certificates/7527072.pem /etc/ssl/certs/7527072.pem"
	I0916 23:57:04.432421  804231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7527072.pem
	I0916 23:57:04.435706  804231 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 23:54 /usr/share/ca-certificates/7527072.pem
	I0916 23:57:04.435752  804231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7527072.pem
	I0916 23:57:04.441863  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7527072.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 23:57:04.450595  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 23:57:04.459588  804231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:04.462866  804231 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:04.462907  804231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:04.469279  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
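	[editor's note] The three "openssl x509 -hash -noout" / "ln -fs" pairs above install each CA into OpenSSL's hashed-directory layout: /etc/ssl/certs/<subject-hash>.0 must point at the PEM file for certificate verification to find the CA. A minimal Go sketch of the same idea, shelling out to openssl exactly as the log does (the helper name is illustrative, not minikube's code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// linkCA computes the OpenSSL subject-name hash of a CA certificate and
	// creates the /etc/ssl/certs/<hash>.0 symlink that hashed-directory
	// lookup expects (the same "openssl x509 -hash" + "ln -fs" pair as above).
	func linkCA(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hash %s: %w", certPath, err)
		}
		hash := strings.TrimSpace(string(out))
		return exec.Command("sudo", "ln", "-fs", certPath, "/etc/ssl/certs/"+hash+".0").Run()
	}

	func main() {
		if err := linkCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Println(err)
		}
	}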
	I0916 23:57:04.478135  804231 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 23:57:04.481236  804231 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 23:57:04.481288  804231 kubeadm.go:926] updating node {m02 192.168.49.3 8443 v1.34.0 containerd true true} ...
	I0916 23:57:04.481383  804231 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-472903-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
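	[editor's note] The kubelet unit text above uses the standard systemd override pattern: the empty "ExecStart=" clears the base unit's command before the node-specific command line (with --hostname-override and --node-ip for m02) is set. A hedged Go sketch of rendering such a drop-in from per-node values; the template text simply mirrors the log output and is not minikube's actual template:

	package main

	import (
		"os"
		"text/template"
	)

	// Per-node values, taken from the log above.
	type kubeletOpts struct {
		Binary, Hostname, NodeIP string
	}

	// The empty ExecStart= line is deliberate: systemd requires it to reset the
	// command defined in the base kubelet.service before overriding it.
	const dropIn = `[Unit]
	Wants=containerd.service

	[Service]
	ExecStart=
	ExecStart={{.Binary}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		t := template.Must(template.New("kubelet").Parse(dropIn))
		_ = t.Execute(os.Stdout, kubeletOpts{
			Binary:   "/var/lib/minikube/binaries/v1.34.0/kubelet",
			Hostname: "ha-472903-m02",
			NodeIP:   "192.168.49.3",
		})
	}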
	I0916 23:57:04.481425  804231 kube-vip.go:115] generating kube-vip config ...
	I0916 23:57:04.481462  804231 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0916 23:57:04.492937  804231 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0916 23:57:04.492999  804231 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
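	[editor's note] The generated manifest runs kube-vip as a static pod on each control plane: the instances compete for the plndr-cp-lock lease and the leader answers ARP for the virtual IP 192.168.49.254 on eth0, so the cluster endpoint survives the loss of any single control plane. Because the ip_vs modules were not found (see the lsmod check above), only VIP failover is configured, not IPVS load balancing. A throwaway Go probe of the API server through the VIP, assuming the VIP is already up; InsecureSkipVerify is used only because this sketch does not load the cluster CA:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		// The same /healthz endpoint returns "ok" later in this log.
		resp, err := client.Get("https://192.168.49.254:8443/healthz")
		if err != nil {
			fmt.Println("VIP not reachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.StatusCode, string(body))
	}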
	I0916 23:57:04.493041  804231 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0916 23:57:04.501084  804231 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 23:57:04.501123  804231 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0916 23:57:04.509217  804231 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0916 23:57:04.525587  804231 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 23:57:04.544042  804231 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0916 23:57:04.561542  804231 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0916 23:57:04.564725  804231 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 23:57:04.574819  804231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:04.638378  804231 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 23:57:04.659569  804231 host.go:66] Checking if "ha-472903" exists ...
	I0916 23:57:04.659878  804231 start.go:317] joinCluster: &{Name:ha-472903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 23:57:04.659986  804231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0916 23:57:04.660033  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:57:04.678136  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0916 23:57:04.817608  804231 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 23:57:04.817663  804231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 79akng.11lpa8n1ba4yh5m1 --discovery-token-ca-cert-hash sha256:52c78ec9ad9a2dc0941e43ce337b864c76ea573e452bc75ed737e69ad76deac1 --ignore-preflight-errors=all --cri-socket unix:///run/containerd/containerd.sock --node-name=ha-472903-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443"
	I0916 23:57:23.327384  804231 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 79akng.11lpa8n1ba4yh5m1 --discovery-token-ca-cert-hash sha256:52c78ec9ad9a2dc0941e43ce337b864c76ea573e452bc75ed737e69ad76deac1 --ignore-preflight-errors=all --cri-socket unix:///run/containerd/containerd.sock --node-name=ha-472903-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443": (18.509693377s)
	I0916 23:57:23.327447  804231 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0916 23:57:23.521334  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-472903-m02 minikube.k8s.io/updated_at=2025_09_16T23_57_23_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a minikube.k8s.io/name=ha-472903 minikube.k8s.io/primary=false
	I0916 23:57:23.592991  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-472903-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0916 23:57:23.664899  804231 start.go:319] duration metric: took 19.005017018s to joinCluster
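	[editor's note] The m02 join above is the usual two-step flow for adding a control plane: ask an existing control plane for a fresh bootstrap token and join command ("kubeadm token create --print-join-command"), then replay the printed "kubeadm join" on the new machine with --control-plane and its own advertise address. No --certificate-key is needed here because the shared certificates were scp'd to the node earlier in this log. Minikube drives both steps over SSH; a hedged local sketch of the same sequence (flag values copied from the log, helper flow illustrative):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Step 1: on an existing control plane, print a join command with a fresh token.
		out, err := exec.Command("kubeadm", "token", "create", "--print-join-command", "--ttl=0").Output()
		if err != nil {
			panic(err)
		}
		joinCmd := strings.TrimSpace(string(out))

		// Step 2: on the new machine, replay it with the control-plane flags the log shows.
		full := joinCmd + " --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443"
		fmt.Println("would run:", full)
		// exec.Command("sh", "-c", full).Run() // uncomment to actually join
	}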
	I0916 23:57:23.664975  804231 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 23:57:23.665223  804231 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0916 23:57:23.665877  804231 out.go:179] * Verifying Kubernetes components...
	I0916 23:57:23.666680  804231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:23.766393  804231 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 23:57:23.779164  804231 kapi.go:59] client config for ha-472903: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key", CAFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0916 23:57:23.779228  804231 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0916 23:57:23.779511  804231 node_ready.go:35] waiting up to 6m0s for node "ha-472903-m02" to be "Ready" ...
	I0916 23:57:24.283593  804231 node_ready.go:49] node "ha-472903-m02" is "Ready"
	I0916 23:57:24.283628  804231 node_ready.go:38] duration metric: took 504.097895ms for node "ha-472903-m02" to be "Ready" ...
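	[editor's note] The "Ready" wait above goes through client-go against the first control plane (note the line just before it, where the stale VIP host in the client config is overridden with https://192.168.49.2:8443). A minimal client-go sketch of the same readiness check; the kubeconfig path is an assumption, the node name is from the log:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-472903-m02", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				fmt.Println("Ready condition:", c.Status) // "True" once the kubelet reports healthy
			}
		}
	}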
	I0916 23:57:24.283648  804231 api_server.go:52] waiting for apiserver process to appear ...
	I0916 23:57:24.283699  804231 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 23:57:24.295735  804231 api_server.go:72] duration metric: took 630.723924ms to wait for apiserver process to appear ...
	I0916 23:57:24.295758  804231 api_server.go:88] waiting for apiserver healthz status ...
	I0916 23:57:24.295774  804231 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0916 23:57:24.299650  804231 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0916 23:57:24.300537  804231 api_server.go:141] control plane version: v1.34.0
	I0916 23:57:24.300558  804231 api_server.go:131] duration metric: took 4.795429ms to wait for apiserver health ...
	I0916 23:57:24.300566  804231 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 23:57:24.304572  804231 system_pods.go:59] 19 kube-system pods found
	I0916 23:57:24.304598  804231 system_pods.go:61] "coredns-66bc5c9577-c94hz" [774f1c0f-9759-44c2-957d-5a97670f951b] Running
	I0916 23:57:24.304604  804231 system_pods.go:61] "coredns-66bc5c9577-qn8m7" [1c58205e-e865-42fc-8282-23e3d779ee97] Running
	I0916 23:57:24.304608  804231 system_pods.go:61] "etcd-ha-472903" [e333577b-838c-41c5-ba86-ce3d7de57077] Running
	I0916 23:57:24.304611  804231 system_pods.go:61] "etcd-ha-472903-m02" [8a478117-c53d-4621-aa09-be3c16d386c0] Pending
	I0916 23:57:24.304615  804231 system_pods.go:61] "kindnet-lh7dv" [1da43ca7-9af7-4573-9cdc-fd21b098ca2c] Running
	I0916 23:57:24.304621  804231 system_pods.go:61] "kindnet-mwf8l" [8c9533d3-defe-487b-a9b4-0502fb8f2d2a] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-mwf8l": pod kindnet-mwf8l is being deleted, cannot be assigned to a host)
	I0916 23:57:24.304628  804231 system_pods.go:61] "kindnet-q7c7s" [85db5b30-8ace-4bb0-8886-32b9ca032b2b] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-q7c7s": pod kindnet-q7c7s is already assigned to node "ha-472903-m02")
	I0916 23:57:24.304639  804231 system_pods.go:61] "kube-apiserver-ha-472903" [e2844751-3962-4753-8b63-79c124dd5fd7] Running
	I0916 23:57:24.304643  804231 system_pods.go:61] "kube-apiserver-ha-472903-m02" [6675419c-7693-4970-b73c-8415bcda1684] Pending
	I0916 23:57:24.304646  804231 system_pods.go:61] "kube-controller-manager-ha-472903" [be5cfd0b-a3b9-44cf-8cde-74e9eb89c738] Running
	I0916 23:57:24.304650  804231 system_pods.go:61] "kube-controller-manager-ha-472903-m02" [54f6e7e0-0a78-4651-b24f-f902c6bf7efb] Pending
	I0916 23:57:24.304657  804231 system_pods.go:61] "kube-proxy-58lkb" [32fed88c-ce9e-4536-8e96-04ab5b4f5d42] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-58lkb": pod kube-proxy-58lkb is already assigned to node "ha-472903-m02")
	I0916 23:57:24.304662  804231 system_pods.go:61] "kube-proxy-d4m8f" [d4a70eec-48a7-4ea6-871a-1b5ed2beca9a] Running
	I0916 23:57:24.304666  804231 system_pods.go:61] "kube-proxy-mf26q" [34502b32-75c1-4078-abd2-4e4d625252d8] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-mf26q": pod kube-proxy-mf26q is already assigned to node "ha-472903-m02")
	I0916 23:57:24.304670  804231 system_pods.go:61] "kube-scheduler-ha-472903" [e949de65-b218-45cb-abe7-79b704aae473] Running
	I0916 23:57:24.304677  804231 system_pods.go:61] "kube-scheduler-ha-472903-m02" [08b5a4f0-3aa6-4a82-b171-afc1eafcd4c2] Pending
	I0916 23:57:24.304679  804231 system_pods.go:61] "kube-vip-ha-472903" [ccdab212-cf0c-4bf0-958b-173e1008f7bc] Running
	I0916 23:57:24.304682  804231 system_pods.go:61] "kube-vip-ha-472903-m02" [748f096f-bec6-4de8-92f0-128db827bdd6] Pending
	I0916 23:57:24.304687  804231 system_pods.go:61] "storage-provisioner" [ac7f283e-4d28-46cf-a519-bd227237d5e7] Running
	I0916 23:57:24.304694  804231 system_pods.go:74] duration metric: took 4.122792ms to wait for pod list to return data ...
	I0916 23:57:24.304700  804231 default_sa.go:34] waiting for default service account to be created ...
	I0916 23:57:24.307165  804231 default_sa.go:45] found service account: "default"
	I0916 23:57:24.307183  804231 default_sa.go:55] duration metric: took 2.474442ms for default service account to be created ...
	I0916 23:57:24.307190  804231 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 23:57:24.310491  804231 system_pods.go:86] 19 kube-system pods found
	I0916 23:57:24.310512  804231 system_pods.go:89] "coredns-66bc5c9577-c94hz" [774f1c0f-9759-44c2-957d-5a97670f951b] Running
	I0916 23:57:24.310517  804231 system_pods.go:89] "coredns-66bc5c9577-qn8m7" [1c58205e-e865-42fc-8282-23e3d779ee97] Running
	I0916 23:57:24.310520  804231 system_pods.go:89] "etcd-ha-472903" [e333577b-838c-41c5-ba86-ce3d7de57077] Running
	I0916 23:57:24.310524  804231 system_pods.go:89] "etcd-ha-472903-m02" [8a478117-c53d-4621-aa09-be3c16d386c0] Pending
	I0916 23:57:24.310527  804231 system_pods.go:89] "kindnet-lh7dv" [1da43ca7-9af7-4573-9cdc-fd21b098ca2c] Running
	I0916 23:57:24.310532  804231 system_pods.go:89] "kindnet-mwf8l" [8c9533d3-defe-487b-a9b4-0502fb8f2d2a] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-mwf8l": pod kindnet-mwf8l is being deleted, cannot be assigned to a host)
	I0916 23:57:24.310556  804231 system_pods.go:89] "kindnet-q7c7s" [85db5b30-8ace-4bb0-8886-32b9ca032b2b] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-q7c7s": pod kindnet-q7c7s is already assigned to node "ha-472903-m02")
	I0916 23:57:24.310566  804231 system_pods.go:89] "kube-apiserver-ha-472903" [e2844751-3962-4753-8b63-79c124dd5fd7] Running
	I0916 23:57:24.310571  804231 system_pods.go:89] "kube-apiserver-ha-472903-m02" [6675419c-7693-4970-b73c-8415bcda1684] Pending
	I0916 23:57:24.310576  804231 system_pods.go:89] "kube-controller-manager-ha-472903" [be5cfd0b-a3b9-44cf-8cde-74e9eb89c738] Running
	I0916 23:57:24.310580  804231 system_pods.go:89] "kube-controller-manager-ha-472903-m02" [54f6e7e0-0a78-4651-b24f-f902c6bf7efb] Pending
	I0916 23:57:24.310588  804231 system_pods.go:89] "kube-proxy-58lkb" [32fed88c-ce9e-4536-8e96-04ab5b4f5d42] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-58lkb": pod kube-proxy-58lkb is already assigned to node "ha-472903-m02")
	I0916 23:57:24.310591  804231 system_pods.go:89] "kube-proxy-d4m8f" [d4a70eec-48a7-4ea6-871a-1b5ed2beca9a] Running
	I0916 23:57:24.310596  804231 system_pods.go:89] "kube-proxy-mf26q" [34502b32-75c1-4078-abd2-4e4d625252d8] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-mf26q": pod kube-proxy-mf26q is already assigned to node "ha-472903-m02")
	I0916 23:57:24.310600  804231 system_pods.go:89] "kube-scheduler-ha-472903" [e949de65-b218-45cb-abe7-79b704aae473] Running
	I0916 23:57:24.310603  804231 system_pods.go:89] "kube-scheduler-ha-472903-m02" [08b5a4f0-3aa6-4a82-b171-afc1eafcd4c2] Pending
	I0916 23:57:24.310608  804231 system_pods.go:89] "kube-vip-ha-472903" [ccdab212-cf0c-4bf0-958b-173e1008f7bc] Running
	I0916 23:57:24.310611  804231 system_pods.go:89] "kube-vip-ha-472903-m02" [748f096f-bec6-4de8-92f0-128db827bdd6] Pending
	I0916 23:57:24.310614  804231 system_pods.go:89] "storage-provisioner" [ac7f283e-4d28-46cf-a519-bd227237d5e7] Running
	I0916 23:57:24.310621  804231 system_pods.go:126] duration metric: took 3.426124ms to wait for k8s-apps to be running ...
	I0916 23:57:24.310629  804231 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 23:57:24.310666  804231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 23:57:24.322152  804231 system_svc.go:56] duration metric: took 11.515834ms WaitForService to wait for kubelet
	I0916 23:57:24.322176  804231 kubeadm.go:578] duration metric: took 657.167547ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 23:57:24.322199  804231 node_conditions.go:102] verifying NodePressure condition ...
	I0916 23:57:24.327707  804231 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 23:57:24.327734  804231 node_conditions.go:123] node cpu capacity is 8
	I0916 23:57:24.327748  804231 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 23:57:24.327754  804231 node_conditions.go:123] node cpu capacity is 8
	I0916 23:57:24.327759  804231 node_conditions.go:105] duration metric: took 5.554046ms to run NodePressure ...
	I0916 23:57:24.327772  804231 start.go:241] waiting for startup goroutines ...
	I0916 23:57:24.327803  804231 start.go:255] writing updated cluster config ...
	I0916 23:57:24.329316  804231 out.go:203] 
	I0916 23:57:24.330356  804231 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0916 23:57:24.330485  804231 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0916 23:57:24.331956  804231 out.go:179] * Starting "ha-472903-m03" control-plane node in "ha-472903" cluster
	I0916 23:57:24.332973  804231 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0916 23:57:24.333962  804231 out.go:179] * Pulling base image v0.0.48 ...
	I0916 23:57:24.334852  804231 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0916 23:57:24.334875  804231 cache.go:58] Caching tarball of preloaded images
	I0916 23:57:24.334942  804231 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0916 23:57:24.334986  804231 preload.go:172] Found /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 23:57:24.334997  804231 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0916 23:57:24.335117  804231 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0916 23:57:24.357217  804231 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0916 23:57:24.357233  804231 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0916 23:57:24.357242  804231 cache.go:232] Successfully downloaded all kic artifacts
	I0916 23:57:24.357267  804231 start.go:360] acquireMachinesLock for ha-472903-m03: {Name:mk61000bb8e4699ca3310a7fc257e30a156b69de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 23:57:24.357354  804231 start.go:364] duration metric: took 71.354µs to acquireMachinesLock for "ha-472903-m03"
	I0916 23:57:24.357375  804231 start.go:93] Provisioning new machine with config: &{Name:ha-472903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:fal
se kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPa
th: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 23:57:24.357498  804231 start.go:125] createHost starting for "m03" (driver="docker")
	I0916 23:57:24.358917  804231 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0916 23:57:24.358994  804231 start.go:159] libmachine.API.Create for "ha-472903" (driver="docker")
	I0916 23:57:24.359023  804231 client.go:168] LocalClient.Create starting
	I0916 23:57:24.359071  804231 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem
	I0916 23:57:24.359103  804231 main.go:141] libmachine: Decoding PEM data...
	I0916 23:57:24.359116  804231 main.go:141] libmachine: Parsing certificate...
	I0916 23:57:24.359164  804231 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem
	I0916 23:57:24.359182  804231 main.go:141] libmachine: Decoding PEM data...
	I0916 23:57:24.359192  804231 main.go:141] libmachine: Parsing certificate...
	I0916 23:57:24.359366  804231 cli_runner.go:164] Run: docker network inspect ha-472903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:57:24.375654  804231 network_create.go:77] Found existing network {name:ha-472903 subnet:0xc001b33bf0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0916 23:57:24.375684  804231 kic.go:121] calculated static IP "192.168.49.4" for the "ha-472903-m03" container
	I0916 23:57:24.375740  804231 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 23:57:24.392165  804231 cli_runner.go:164] Run: docker volume create ha-472903-m03 --label name.minikube.sigs.k8s.io=ha-472903-m03 --label created_by.minikube.sigs.k8s.io=true
	I0916 23:57:24.408273  804231 oci.go:103] Successfully created a docker volume ha-472903-m03
	I0916 23:57:24.408342  804231 cli_runner.go:164] Run: docker run --rm --name ha-472903-m03-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-472903-m03 --entrypoint /usr/bin/test -v ha-472903-m03:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0916 23:57:24.957699  804231 oci.go:107] Successfully prepared a docker volume ha-472903-m03
	I0916 23:57:24.957748  804231 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0916 23:57:24.957783  804231 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 23:57:24.957856  804231 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-472903-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 23:57:29.095091  804231 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-472903-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.13717471s)
	I0916 23:57:29.095123  804231 kic.go:203] duration metric: took 4.137337977s to extract preloaded images to volume ...
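	[editor's note] Node m03 is provisioned with the "kic" pattern visible above: create a named docker volume for the node, untar the preloaded image tarball into it with a throwaway container, then start the real node container with that volume mounted at /var, so containerd comes up with its image store already populated instead of pulling everything. A hedged Go sketch of those three docker invocations (image tag and mount points copied from the log; paths shortened; the helper is illustrative):

	package main

	import (
		"fmt"
		"os/exec"
	)

	const kicbase = "gcr.io/k8s-minikube/kicbase:v0.0.48"
	const preload = "/home/jenkins/.minikube/cache/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4" // shortened from the log

	func run(args ...string) error {
		fmt.Println("docker", args)
		return exec.Command("docker", args...).Run()
	}

	func main() {
		node := "ha-472903-m03"
		// 1. A named volume becomes the node's /var.
		_ = run("volume", "create", node)
		// 2. A throwaway container extracts the lz4 preload tarball into the volume.
		_ = run("run", "--rm", "--entrypoint", "/usr/bin/tar",
			"-v", preload+":/preloaded.tar:ro", "-v", node+":/extractDir",
			kicbase, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
		// 3. The real node container mounts the pre-populated volume at /var.
		_ = run("run", "-d", "--privileged", "--name", node, "--hostname", node,
			"--network", "ha-472903", "--ip", "192.168.49.4", "--volume", node+":/var", kicbase)
	}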
	W0916 23:57:29.095214  804231 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0916 23:57:29.095253  804231 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0916 23:57:29.095300  804231 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 23:57:29.145859  804231 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-472903-m03 --name ha-472903-m03 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-472903-m03 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-472903-m03 --network ha-472903 --ip 192.168.49.4 --volume ha-472903-m03:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0916 23:57:29.392873  804231 cli_runner.go:164] Run: docker container inspect ha-472903-m03 --format={{.State.Running}}
	I0916 23:57:29.412389  804231 cli_runner.go:164] Run: docker container inspect ha-472903-m03 --format={{.State.Status}}
	I0916 23:57:29.430593  804231 cli_runner.go:164] Run: docker exec ha-472903-m03 stat /var/lib/dpkg/alternatives/iptables
	I0916 23:57:29.476672  804231 oci.go:144] the created container "ha-472903-m03" has a running status.
	I0916 23:57:29.476707  804231 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa...
	I0916 23:57:29.927926  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0916 23:57:29.927968  804231 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 23:57:29.954518  804231 cli_runner.go:164] Run: docker container inspect ha-472903-m03 --format={{.State.Status}}
	I0916 23:57:29.975503  804231 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 23:57:29.975522  804231 kic_runner.go:114] Args: [docker exec --privileged ha-472903-m03 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 23:57:30.023965  804231 cli_runner.go:164] Run: docker container inspect ha-472903-m03 --format={{.State.Status}}
	I0916 23:57:30.040966  804231 machine.go:93] provisionDockerMachine start ...
	I0916 23:57:30.041051  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0916 23:57:30.058157  804231 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:30.058388  804231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33554 <nil> <nil>}
	I0916 23:57:30.058400  804231 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 23:57:30.190964  804231 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-472903-m03
	
	I0916 23:57:30.190995  804231 ubuntu.go:182] provisioning hostname "ha-472903-m03"
	I0916 23:57:30.191059  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0916 23:57:30.208862  804231 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:30.209123  804231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33554 <nil> <nil>}
	I0916 23:57:30.209144  804231 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-472903-m03 && echo "ha-472903-m03" | sudo tee /etc/hostname
	I0916 23:57:30.354363  804231 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-472903-m03
	
	I0916 23:57:30.354466  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0916 23:57:30.372285  804231 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:30.372570  804231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33554 <nil> <nil>}
	I0916 23:57:30.372590  804231 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-472903-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-472903-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-472903-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 23:57:30.504861  804231 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 23:57:30.504898  804231 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-749120/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-749120/.minikube}
	I0916 23:57:30.504920  804231 ubuntu.go:190] setting up certificates
	I0916 23:57:30.504933  804231 provision.go:84] configureAuth start
	I0916 23:57:30.504996  804231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m03
	I0916 23:57:30.522218  804231 provision.go:143] copyHostCerts
	I0916 23:57:30.522259  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem
	I0916 23:57:30.522297  804231 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem, removing ...
	I0916 23:57:30.522306  804231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem
	I0916 23:57:30.522369  804231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem (1078 bytes)
	I0916 23:57:30.522483  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem
	I0916 23:57:30.522506  804231 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem, removing ...
	I0916 23:57:30.522510  804231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem
	I0916 23:57:30.522547  804231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem (1123 bytes)
	I0916 23:57:30.522650  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem
	I0916 23:57:30.522673  804231 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem, removing ...
	I0916 23:57:30.522678  804231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem
	I0916 23:57:30.522703  804231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem (1675 bytes)
	I0916 23:57:30.522769  804231 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem org=jenkins.ha-472903-m03 san=[127.0.0.1 192.168.49.4 ha-472903-m03 localhost minikube]
	I0916 23:57:30.644066  804231 provision.go:177] copyRemoteCerts
	I0916 23:57:30.644118  804231 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 23:57:30.644153  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0916 23:57:30.661612  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33554 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa Username:docker}
	I0916 23:57:30.757452  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 23:57:30.757504  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 23:57:30.782942  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 23:57:30.782994  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 23:57:30.806508  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 23:57:30.806562  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 23:57:30.829686  804231 provision.go:87] duration metric: took 324.735799ms to configureAuth
	I0916 23:57:30.829709  804231 ubuntu.go:206] setting minikube options for container-runtime
	I0916 23:57:30.829902  804231 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0916 23:57:30.829916  804231 machine.go:96] duration metric: took 788.930334ms to provisionDockerMachine
	I0916 23:57:30.829925  804231 client.go:171] duration metric: took 6.470893656s to LocalClient.Create
	I0916 23:57:30.829958  804231 start.go:167] duration metric: took 6.470963089s to libmachine.API.Create "ha-472903"
	I0916 23:57:30.829971  804231 start.go:293] postStartSetup for "ha-472903-m03" (driver="docker")
	I0916 23:57:30.829982  804231 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 23:57:30.830042  804231 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 23:57:30.830092  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0916 23:57:30.847215  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33554 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa Username:docker}
	I0916 23:57:30.945849  804231 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 23:57:30.949055  804231 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 23:57:30.949086  804231 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 23:57:30.949098  804231 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 23:57:30.949107  804231 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0916 23:57:30.949120  804231 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-749120/.minikube/addons for local assets ...
	I0916 23:57:30.949174  804231 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-749120/.minikube/files for local assets ...
	I0916 23:57:30.949274  804231 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> 7527072.pem in /etc/ssl/certs
	I0916 23:57:30.949286  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> /etc/ssl/certs/7527072.pem
	I0916 23:57:30.949392  804231 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 23:57:30.957998  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem --> /etc/ssl/certs/7527072.pem (1708 bytes)
	I0916 23:57:30.983779  804231 start.go:296] duration metric: took 153.794843ms for postStartSetup
	I0916 23:57:30.984109  804231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m03
	I0916 23:57:31.001367  804231 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0916 23:57:31.001618  804231 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 23:57:31.001659  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0916 23:57:31.019034  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33554 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa Username:docker}
	I0916 23:57:31.110814  804231 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 23:57:31.115046  804231 start.go:128] duration metric: took 6.757532739s to createHost
	I0916 23:57:31.115072  804231 start.go:83] releasing machines lock for "ha-472903-m03", held for 6.757707303s
	I0916 23:57:31.115154  804231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m03
	I0916 23:57:31.133371  804231 out.go:179] * Found network options:
	I0916 23:57:31.134481  804231 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0916 23:57:31.135570  804231 proxy.go:120] fail to check proxy env: Error ip not in block
	W0916 23:57:31.135598  804231 proxy.go:120] fail to check proxy env: Error ip not in block
	W0916 23:57:31.135626  804231 proxy.go:120] fail to check proxy env: Error ip not in block
	W0916 23:57:31.135644  804231 proxy.go:120] fail to check proxy env: Error ip not in block
	I0916 23:57:31.135714  804231 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 23:57:31.135763  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0916 23:57:31.135778  804231 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 23:57:31.135845  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0916 23:57:31.152320  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33554 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa Username:docker}
	I0916 23:57:31.153909  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33554 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa Username:docker}
	I0916 23:57:31.320495  804231 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 23:57:31.348141  804231 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 23:57:31.348214  804231 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 23:57:31.373693  804231 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0916 23:57:31.373720  804231 start.go:495] detecting cgroup driver to use...
	I0916 23:57:31.373748  804231 detect.go:190] detected "systemd" cgroup driver on host os
	I0916 23:57:31.373802  804231 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 23:57:31.385560  804231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 23:57:31.396165  804231 docker.go:218] disabling cri-docker service (if available) ...
	I0916 23:57:31.396214  804231 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 23:57:31.409119  804231 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 23:57:31.422244  804231 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 23:57:31.489491  804231 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 23:57:31.557098  804231 docker.go:234] disabling docker service ...
	I0916 23:57:31.557149  804231 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 23:57:31.574601  804231 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 23:57:31.585773  804231 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 23:57:31.649988  804231 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 23:57:31.717070  804231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 23:57:31.727904  804231 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 23:57:31.743685  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0916 23:57:31.755962  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 23:57:31.766072  804231 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0916 23:57:31.766138  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0916 23:57:31.775522  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 23:57:31.785914  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 23:57:31.795134  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 23:57:31.804565  804231 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 23:57:31.813319  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 23:57:31.822500  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 23:57:31.831597  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 23:57:31.840887  804231 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 23:57:31.848842  804231 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 23:57:31.857026  804231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:31.920521  804231 ssh_runner.go:195] Run: sudo systemctl restart containerd
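	[editor's note] Before containerd is restarted, the run of sed commands above rewrites /etc/containerd/config.toml in place: the sandbox (pause) image is pinned to registry.k8s.io/pause:3.10.1, SystemdCgroup is forced to true to match the "systemd" cgroup driver detected on the host, the legacy v1 runtime shims are rewritten to io.containerd.runc.v2, and the CNI conf_dir is pointed at /etc/cni/net.d. A minimal Go equivalent of the SystemdCgroup edit, as an illustration of the same in-place regexp rewrite (not minikube's code):

	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		const path = "/etc/containerd/config.toml"
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		// Same substitution as: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g'
		re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
		out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = true"))
		if err := os.WriteFile(path, out, 0644); err != nil {
			panic(err)
		}
	}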
	I0916 23:57:32.022746  804231 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0916 23:57:32.022804  804231 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0916 23:57:32.026838  804231 start.go:563] Will wait 60s for crictl version
	I0916 23:57:32.026888  804231 ssh_runner.go:195] Run: which crictl
	I0916 23:57:32.030295  804231 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 23:57:32.064100  804231 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0916 23:57:32.064158  804231 ssh_runner.go:195] Run: containerd --version
	I0916 23:57:32.088276  804231 ssh_runner.go:195] Run: containerd --version
	I0916 23:57:32.114182  804231 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0916 23:57:32.115194  804231 out.go:179]   - env NO_PROXY=192.168.49.2
	I0916 23:57:32.116236  804231 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0916 23:57:32.117151  804231 cli_runner.go:164] Run: docker network inspect ha-472903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:57:32.133290  804231 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 23:57:32.136901  804231 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 23:57:32.147860  804231 mustload.go:65] Loading cluster: ha-472903
	I0916 23:57:32.148060  804231 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0916 23:57:32.148275  804231 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0916 23:57:32.164278  804231 host.go:66] Checking if "ha-472903" exists ...
	I0916 23:57:32.164570  804231 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903 for IP: 192.168.49.4
	I0916 23:57:32.164584  804231 certs.go:194] generating shared ca certs ...
	I0916 23:57:32.164601  804231 certs.go:226] acquiring lock for ca certs: {Name:mk87d179b4a631193bd9c86db8034ccf19400cde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:32.164751  804231 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key
	I0916 23:57:32.164800  804231 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key
	I0916 23:57:32.164814  804231 certs.go:256] generating profile certs ...
	I0916 23:57:32.164911  804231 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key
	I0916 23:57:32.164940  804231 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.14b885b8
	I0916 23:57:32.164958  804231 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.14b885b8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I0916 23:57:32.342596  804231 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.14b885b8 ...
	I0916 23:57:32.342623  804231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.14b885b8: {Name:mk455c3f0ae4544ddcdf75c25cbd1b87a24e61a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:32.342787  804231 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.14b885b8 ...
	I0916 23:57:32.342799  804231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.14b885b8: {Name:mkbd551bf9ae23c129f7e263550d20b4aac5d095 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:32.342871  804231 certs.go:381] copying /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.14b885b8 -> /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt
	I0916 23:57:32.343007  804231 certs.go:385] copying /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.14b885b8 -> /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key
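
	[editor note] The apiserver serving certificate generated above is signed by the profile CA with every control-plane address as an IP SAN: the service IP 10.96.0.1, localhost, each node IP, and the 192.168.49.254 VIP. minikube does this with its own Go crypto helpers; a rough openssl equivalent is sketched below. The file names (san.cnf, ca.crt, ca.key) and the CN are placeholders, not the paths or subject minikube actually uses:

	    cat > san.cnf <<'EOF'
	    subjectAltName = IP:10.96.0.1, IP:127.0.0.1, IP:10.0.0.1, IP:192.168.49.2, IP:192.168.49.3, IP:192.168.49.4, IP:192.168.49.254
	    EOF
	    # New key + CSR, then sign with the profile CA, attaching the SANs at signing time.
	    openssl req -new -newkey rsa:2048 -nodes -subj "/CN=kube-apiserver" \
	        -keyout apiserver.key -out apiserver.csr
	    openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
	        -days 365 -extfile san.cnf -out apiserver.crt
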
	I0916 23:57:32.343136  804231 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key
	I0916 23:57:32.343152  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 23:57:32.343165  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 23:57:32.343178  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 23:57:32.343191  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 23:57:32.343204  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 23:57:32.343214  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 23:57:32.343229  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 23:57:32.343247  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 23:57:32.343299  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem (1338 bytes)
	W0916 23:57:32.343327  804231 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707_empty.pem, impossibly tiny 0 bytes
	I0916 23:57:32.343337  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem (1675 bytes)
	I0916 23:57:32.343357  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem (1078 bytes)
	I0916 23:57:32.343379  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem (1123 bytes)
	I0916 23:57:32.343400  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem (1675 bytes)
	I0916 23:57:32.343464  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem (1708 bytes)
	I0916 23:57:32.343501  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:32.343521  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem -> /usr/share/ca-certificates/752707.pem
	I0916 23:57:32.343534  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> /usr/share/ca-certificates/7527072.pem
	I0916 23:57:32.343588  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:57:32.360782  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0916 23:57:32.447595  804231 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0916 23:57:32.451217  804231 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0916 23:57:32.464033  804231 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0916 23:57:32.467273  804231 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0916 23:57:32.478860  804231 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0916 23:57:32.482180  804231 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0916 23:57:32.493717  804231 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0916 23:57:32.496761  804231 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0916 23:57:32.507849  804231 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0916 23:57:32.511054  804231 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0916 23:57:32.523733  804231 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0916 23:57:32.526954  804231 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0916 23:57:32.538314  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 23:57:32.561866  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 23:57:32.585900  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 23:57:32.610048  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0916 23:57:32.634812  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0916 23:57:32.659163  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 23:57:32.682157  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 23:57:32.704663  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 23:57:32.727856  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 23:57:32.752740  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem --> /usr/share/ca-certificates/752707.pem (1338 bytes)
	I0916 23:57:32.775900  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem --> /usr/share/ca-certificates/7527072.pem (1708 bytes)
	I0916 23:57:32.798720  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0916 23:57:32.815542  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0916 23:57:32.832241  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0916 23:57:32.848964  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0916 23:57:32.865780  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0916 23:57:32.882614  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0916 23:57:32.899296  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0916 23:57:32.916516  804231 ssh_runner.go:195] Run: openssl version
	I0916 23:57:32.921611  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7527072.pem && ln -fs /usr/share/ca-certificates/7527072.pem /etc/ssl/certs/7527072.pem"
	I0916 23:57:32.930917  804231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7527072.pem
	I0916 23:57:32.934241  804231 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 23:54 /usr/share/ca-certificates/7527072.pem
	I0916 23:57:32.934283  804231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7527072.pem
	I0916 23:57:32.941354  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7527072.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 23:57:32.950335  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 23:57:32.959292  804231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:32.962576  804231 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:32.962623  804231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:32.968989  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 23:57:32.978331  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/752707.pem && ln -fs /usr/share/ca-certificates/752707.pem /etc/ssl/certs/752707.pem"
	I0916 23:57:32.987188  804231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/752707.pem
	I0916 23:57:32.990463  804231 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 23:54 /usr/share/ca-certificates/752707.pem
	I0916 23:57:32.990497  804231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/752707.pem
	I0916 23:57:32.996813  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/752707.pem /etc/ssl/certs/51391683.0"
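
	[editor note] The three blocks above install each CA into the system trust store using OpenSSL's subject-hash naming: hash the certificate, then symlink it as /etc/ssl/certs/<hash>.0 (e.g. b5213941.0 for minikubeCA.pem). The same trick spelled out by hand, using the paths from the log:

	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
	    # `openssl rehash /etc/ssl/certs` (or the older c_rehash) rebuilds such links for a whole directory.
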
	I0916 23:57:33.005924  804231 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 23:57:33.009122  804231 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 23:57:33.009183  804231 kubeadm.go:926] updating node {m03 192.168.49.4 8443 v1.34.0 containerd true true} ...
	I0916 23:57:33.009266  804231 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-472903-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
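
	[editor note] The kubelet unit above pins the node name and IP for m03; it is installed as a systemd drop-in a few lines below (10-kubeadm.conf). To inspect what actually landed on the node, generic systemd commands (not from the log) are enough:

	    systemctl cat kubelet                 # base unit plus the 10-kubeadm.conf drop-in
	    journalctl -u kubelet -n 50 --no-pager
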
	I0916 23:57:33.009291  804231 kube-vip.go:115] generating kube-vip config ...
	I0916 23:57:33.009319  804231 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0916 23:57:33.021189  804231 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0916 23:57:33.021246  804231 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
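
	[editor note] Because `lsmod | grep ip_vs` came back empty, kube-vip is generated without control-plane load-balancing: the manifest above carries cp_enable but no lb_enable. If IPVS were wanted, the usual fix is to load the modules on the host before generating the config; a sketch, assuming the standard ip_vs module set is built for the running kernel:

	    sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack
	    lsmod | grep ip_vs
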
	I0916 23:57:33.021293  804231 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0916 23:57:33.029533  804231 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 23:57:33.029576  804231 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0916 23:57:33.038861  804231 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0916 23:57:33.056092  804231 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 23:57:33.075506  804231 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0916 23:57:33.093918  804231 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0916 23:57:33.097171  804231 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 23:57:33.107668  804231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:33.167706  804231 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 23:57:33.188453  804231 host.go:66] Checking if "ha-472903" exists ...
	I0916 23:57:33.188671  804231 start.go:317] joinCluster: &{Name:ha-472903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:
false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVM
netPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 23:57:33.188781  804231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0916 23:57:33.188819  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:57:33.210165  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0916 23:57:33.351871  804231 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 23:57:33.351930  804231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token uj456s.97hymgg3kmg6owuv --discovery-token-ca-cert-hash sha256:52c78ec9ad9a2dc0941e43ce337b864c76ea573e452bc75ed737e69ad76deac1 --ignore-preflight-errors=all --cri-socket unix:///run/containerd/containerd.sock --node-name=ha-472903-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443"
	I0916 23:57:51.860237  804231 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token uj456s.97hymgg3kmg6owuv --discovery-token-ca-cert-hash sha256:52c78ec9ad9a2dc0941e43ce337b864c76ea573e452bc75ed737e69ad76deac1 --ignore-preflight-errors=all --cri-socket unix:///run/containerd/containerd.sock --node-name=ha-472903-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443": (18.508258539s)
	I0916 23:57:51.860308  804231 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0916 23:57:52.080986  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-472903-m03 minikube.k8s.io/updated_at=2025_09_16T23_57_52_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a minikube.k8s.io/name=ha-472903 minikube.k8s.io/primary=false
	I0916 23:57:52.152525  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-472903-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0916 23:57:52.226560  804231 start.go:319] duration metric: took 19.037884553s to joinCluster
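
	[editor note] At this point ha-472903-m03 has joined as a third control-plane node, been labeled, and had its control-plane NoSchedule taint removed. Equivalent spot checks from outside the test harness, using standard kubectl against the same cluster:

	    kubectl get nodes -o wide
	    kubectl get node ha-472903-m03 -o jsonpath='{.spec.taints}'     # expect empty after the taint removal
	    kubectl -n kube-system get pods --field-selector spec.nodeName=ha-472903-m03 -o wide
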
	I0916 23:57:52.226624  804231 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 23:57:52.226912  804231 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0916 23:57:52.227744  804231 out.go:179] * Verifying Kubernetes components...
	I0916 23:57:52.228620  804231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:52.334638  804231 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 23:57:52.349036  804231 kapi.go:59] client config for ha-472903: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key", CAFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0916 23:57:52.349105  804231 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0916 23:57:52.349317  804231 node_ready.go:35] waiting up to 6m0s for node "ha-472903-m03" to be "Ready" ...
	I0916 23:57:54.352346  804231 node_ready.go:49] node "ha-472903-m03" is "Ready"
	I0916 23:57:54.352374  804231 node_ready.go:38] duration metric: took 2.003044453s for node "ha-472903-m03" to be "Ready" ...
	I0916 23:57:54.352389  804231 api_server.go:52] waiting for apiserver process to appear ...
	I0916 23:57:54.352476  804231 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 23:57:54.365259  804231 api_server.go:72] duration metric: took 2.138606454s to wait for apiserver process to appear ...
	I0916 23:57:54.365280  804231 api_server.go:88] waiting for apiserver healthz status ...
	I0916 23:57:54.365298  804231 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0916 23:57:54.370985  804231 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0916 23:57:54.371831  804231 api_server.go:141] control plane version: v1.34.0
	I0916 23:57:54.371850  804231 api_server.go:131] duration metric: took 6.564025ms to wait for apiserver health ...
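
	[editor note] The health gate above probes the apiserver's /healthz directly on the first node (192.168.49.2:8443), bypassing the 192.168.49.254 VIP. The same probe by hand; the cert file names below are placeholders for the profile paths quoted in the client config earlier in the log:

	    kubectl get --raw /healthz            # uses the kubeconfig credentials, prints "ok"
	    curl --cacert ca.crt --cert client.crt --key client.key https://192.168.49.2:8443/healthz
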
	I0916 23:57:54.371858  804231 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 23:57:54.376785  804231 system_pods.go:59] 27 kube-system pods found
	I0916 23:57:54.376811  804231 system_pods.go:61] "coredns-66bc5c9577-c94hz" [774f1c0f-9759-44c2-957d-5a97670f951b] Running
	I0916 23:57:54.376815  804231 system_pods.go:61] "coredns-66bc5c9577-qn8m7" [1c58205e-e865-42fc-8282-23e3d779ee97] Running
	I0916 23:57:54.376818  804231 system_pods.go:61] "etcd-ha-472903" [e333577b-838c-41c5-ba86-ce3d7de57077] Running
	I0916 23:57:54.376822  804231 system_pods.go:61] "etcd-ha-472903-m02" [8a478117-c53d-4621-aa09-be3c16d386c0] Running
	I0916 23:57:54.376824  804231 system_pods.go:61] "etcd-ha-472903-m03" [73e10c6a-306a-4c7e-b816-1b8d6b815292] Pending
	I0916 23:57:54.376830  804231 system_pods.go:61] "kindnet-2dqnn" [f5c4164d-0d88-4b7b-bc52-18a7e211fe98] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-2dqnn": pod kindnet-2dqnn is already assigned to node "ha-472903-m03")
	I0916 23:57:54.376833  804231 system_pods.go:61] "kindnet-lh7dv" [1da43ca7-9af7-4573-9cdc-fd21b098ca2c] Running
	I0916 23:57:54.376838  804231 system_pods.go:61] "kindnet-q7c7s" [85db5b30-8ace-4bb0-8886-32b9ca032b2b] Running
	I0916 23:57:54.376842  804231 system_pods.go:61] "kindnet-wwdfr" [e86a6e30-712e-4d39-a235-87489d16c0f3] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-wwdfr": pod kindnet-wwdfr is already assigned to node "ha-472903-m03")
	I0916 23:57:54.376849  804231 system_pods.go:61] "kindnet-x6twd" [f2346479-1adb-4bc7-af07-971525be2b05] Pending: PodScheduled:SchedulerError (pod f2346479-1adb-4bc7-af07-971525be2b05(kube-system/kindnet-x6twd) is in the cache, so can't be assumed)
	I0916 23:57:54.376853  804231 system_pods.go:61] "kube-apiserver-ha-472903" [e2844751-3962-4753-8b63-79c124dd5fd7] Running
	I0916 23:57:54.376858  804231 system_pods.go:61] "kube-apiserver-ha-472903-m02" [6675419c-7693-4970-b73c-8415bcda1684] Running
	I0916 23:57:54.376861  804231 system_pods.go:61] "kube-apiserver-ha-472903-m03" [8c79e747-e193-4471-a4be-ab4d604998ad] Pending
	I0916 23:57:54.376867  804231 system_pods.go:61] "kube-controller-manager-ha-472903" [be5cfd0b-a3b9-44cf-8cde-74e9eb89c738] Running
	I0916 23:57:54.376870  804231 system_pods.go:61] "kube-controller-manager-ha-472903-m02" [54f6e7e0-0a78-4651-b24f-f902c6bf7efb] Running
	I0916 23:57:54.376873  804231 system_pods.go:61] "kube-controller-manager-ha-472903-m03" [d48b1c84-6653-43e8-9322-ae2c64471dde] Pending
	I0916 23:57:54.376876  804231 system_pods.go:61] "kube-proxy-58lkb" [32fed88c-ce9e-4536-8e96-04ab5b4f5d42] Running
	I0916 23:57:54.376881  804231 system_pods.go:61] "kube-proxy-d4m8f" [d4a70eec-48a7-4ea6-871a-1b5ed2beca9a] Running
	I0916 23:57:54.376885  804231 system_pods.go:61] "kube-proxy-kn6nb" [53644856-9fda-4556-bbb5-12254c4b00a3] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-kn6nb": pod kube-proxy-kn6nb is already assigned to node "ha-472903-m03")
	I0916 23:57:54.376889  804231 system_pods.go:61] "kube-proxy-xhlnz" [1967fed1-7529-46d0-accd-ab74751b47fa] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-xhlnz": pod kube-proxy-xhlnz is already assigned to node "ha-472903-m03")
	I0916 23:57:54.376894  804231 system_pods.go:61] "kube-scheduler-ha-472903" [e949de65-b218-45cb-abe7-79b704aae473] Running
	I0916 23:57:54.376897  804231 system_pods.go:61] "kube-scheduler-ha-472903-m02" [08b5a4f0-3aa6-4a82-b171-afc1eafcd4c2] Running
	I0916 23:57:54.376900  804231 system_pods.go:61] "kube-scheduler-ha-472903-m03" [7b954c46-3b8c-47e7-b10c-a20dd936d45c] Pending
	I0916 23:57:54.376904  804231 system_pods.go:61] "kube-vip-ha-472903" [ccdab212-cf0c-4bf0-958b-173e1008f7bc] Running
	I0916 23:57:54.376907  804231 system_pods.go:61] "kube-vip-ha-472903-m02" [748f096f-bec6-4de8-92f0-128db827bdd6] Running
	I0916 23:57:54.376910  804231 system_pods.go:61] "kube-vip-ha-472903-m03" [62b1f237-95c3-4a40-b3b7-a519f7c80ad4] Pending
	I0916 23:57:54.376913  804231 system_pods.go:61] "storage-provisioner" [ac7f283e-4d28-46cf-a519-bd227237d5e7] Running
	I0916 23:57:54.376918  804231 system_pods.go:74] duration metric: took 5.052009ms to wait for pod list to return data ...
	I0916 23:57:54.376925  804231 default_sa.go:34] waiting for default service account to be created ...
	I0916 23:57:54.378969  804231 default_sa.go:45] found service account: "default"
	I0916 23:57:54.378989  804231 default_sa.go:55] duration metric: took 2.056584ms for default service account to be created ...
	I0916 23:57:54.378999  804231 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 23:57:54.383753  804231 system_pods.go:86] 27 kube-system pods found
	I0916 23:57:54.383781  804231 system_pods.go:89] "coredns-66bc5c9577-c94hz" [774f1c0f-9759-44c2-957d-5a97670f951b] Running
	I0916 23:57:54.383790  804231 system_pods.go:89] "coredns-66bc5c9577-qn8m7" [1c58205e-e865-42fc-8282-23e3d779ee97] Running
	I0916 23:57:54.383796  804231 system_pods.go:89] "etcd-ha-472903" [e333577b-838c-41c5-ba86-ce3d7de57077] Running
	I0916 23:57:54.383802  804231 system_pods.go:89] "etcd-ha-472903-m02" [8a478117-c53d-4621-aa09-be3c16d386c0] Running
	I0916 23:57:54.383812  804231 system_pods.go:89] "etcd-ha-472903-m03" [73e10c6a-306a-4c7e-b816-1b8d6b815292] Pending
	I0916 23:57:54.383821  804231 system_pods.go:89] "kindnet-2dqnn" [f5c4164d-0d88-4b7b-bc52-18a7e211fe98] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-2dqnn": pod kindnet-2dqnn is already assigned to node "ha-472903-m03")
	I0916 23:57:54.383829  804231 system_pods.go:89] "kindnet-lh7dv" [1da43ca7-9af7-4573-9cdc-fd21b098ca2c] Running
	I0916 23:57:54.383837  804231 system_pods.go:89] "kindnet-q7c7s" [85db5b30-8ace-4bb0-8886-32b9ca032b2b] Running
	I0916 23:57:54.383842  804231 system_pods.go:89] "kindnet-wwdfr" [e86a6e30-712e-4d39-a235-87489d16c0f3] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-wwdfr": pod kindnet-wwdfr is already assigned to node "ha-472903-m03")
	I0916 23:57:54.383852  804231 system_pods.go:89] "kindnet-x6twd" [f2346479-1adb-4bc7-af07-971525be2b05] Pending: PodScheduled:SchedulerError (pod f2346479-1adb-4bc7-af07-971525be2b05(kube-system/kindnet-x6twd) is in the cache, so can't be assumed)
	I0916 23:57:54.383863  804231 system_pods.go:89] "kube-apiserver-ha-472903" [e2844751-3962-4753-8b63-79c124dd5fd7] Running
	I0916 23:57:54.383874  804231 system_pods.go:89] "kube-apiserver-ha-472903-m02" [6675419c-7693-4970-b73c-8415bcda1684] Running
	I0916 23:57:54.383881  804231 system_pods.go:89] "kube-apiserver-ha-472903-m03" [8c79e747-e193-4471-a4be-ab4d604998ad] Pending
	I0916 23:57:54.383887  804231 system_pods.go:89] "kube-controller-manager-ha-472903" [be5cfd0b-a3b9-44cf-8cde-74e9eb89c738] Running
	I0916 23:57:54.383895  804231 system_pods.go:89] "kube-controller-manager-ha-472903-m02" [54f6e7e0-0a78-4651-b24f-f902c6bf7efb] Running
	I0916 23:57:54.383900  804231 system_pods.go:89] "kube-controller-manager-ha-472903-m03" [d48b1c84-6653-43e8-9322-ae2c64471dde] Pending
	I0916 23:57:54.383908  804231 system_pods.go:89] "kube-proxy-58lkb" [32fed88c-ce9e-4536-8e96-04ab5b4f5d42] Running
	I0916 23:57:54.383913  804231 system_pods.go:89] "kube-proxy-d4m8f" [d4a70eec-48a7-4ea6-871a-1b5ed2beca9a] Running
	I0916 23:57:54.383921  804231 system_pods.go:89] "kube-proxy-kn6nb" [53644856-9fda-4556-bbb5-12254c4b00a3] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-kn6nb": pod kube-proxy-kn6nb is already assigned to node "ha-472903-m03")
	I0916 23:57:54.383930  804231 system_pods.go:89] "kube-proxy-xhlnz" [1967fed1-7529-46d0-accd-ab74751b47fa] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-xhlnz": pod kube-proxy-xhlnz is already assigned to node "ha-472903-m03")
	I0916 23:57:54.383939  804231 system_pods.go:89] "kube-scheduler-ha-472903" [e949de65-b218-45cb-abe7-79b704aae473] Running
	I0916 23:57:54.383946  804231 system_pods.go:89] "kube-scheduler-ha-472903-m02" [08b5a4f0-3aa6-4a82-b171-afc1eafcd4c2] Running
	I0916 23:57:54.383955  804231 system_pods.go:89] "kube-scheduler-ha-472903-m03" [7b954c46-3b8c-47e7-b10c-a20dd936d45c] Pending
	I0916 23:57:54.383962  804231 system_pods.go:89] "kube-vip-ha-472903" [ccdab212-cf0c-4bf0-958b-173e1008f7bc] Running
	I0916 23:57:54.383967  804231 system_pods.go:89] "kube-vip-ha-472903-m02" [748f096f-bec6-4de8-92f0-128db827bdd6] Running
	I0916 23:57:54.383975  804231 system_pods.go:89] "kube-vip-ha-472903-m03" [62b1f237-95c3-4a40-b3b7-a519f7c80ad4] Pending
	I0916 23:57:54.383980  804231 system_pods.go:89] "storage-provisioner" [ac7f283e-4d28-46cf-a519-bd227237d5e7] Running
	I0916 23:57:54.383991  804231 system_pods.go:126] duration metric: took 4.985254ms to wait for k8s-apps to be running ...
	I0916 23:57:54.384002  804231 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 23:57:54.384056  804231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 23:57:54.395540  804231 system_svc.go:56] duration metric: took 11.532177ms WaitForService to wait for kubelet
	I0916 23:57:54.395557  804231 kubeadm.go:578] duration metric: took 2.168909422s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 23:57:54.395577  804231 node_conditions.go:102] verifying NodePressure condition ...
	I0916 23:57:54.398165  804231 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 23:57:54.398183  804231 node_conditions.go:123] node cpu capacity is 8
	I0916 23:57:54.398194  804231 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 23:57:54.398197  804231 node_conditions.go:123] node cpu capacity is 8
	I0916 23:57:54.398201  804231 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 23:57:54.398205  804231 node_conditions.go:123] node cpu capacity is 8
	I0916 23:57:54.398209  804231 node_conditions.go:105] duration metric: took 2.627179ms to run NodePressure ...
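
	[editor note] The NodePressure step reads each node's capacity (three nodes, 8 CPUs and 304681132Ki of ephemeral storage apiece). The same figures can be pulled with a generic jsonpath query, not part of the test output:

	    kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.capacity.cpu}{"\t"}{.status.capacity.ephemeral-storage}{"\n"}{end}'
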
	I0916 23:57:54.398219  804231 start.go:241] waiting for startup goroutines ...
	I0916 23:57:54.398248  804231 start.go:255] writing updated cluster config ...
	I0916 23:57:54.398554  804231 ssh_runner.go:195] Run: rm -f paused
	I0916 23:57:54.402187  804231 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0916 23:57:54.402686  804231 kapi.go:59] client config for ha-472903: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key", CAFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 23:57:54.405144  804231 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-c94hz" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:54.409401  804231 pod_ready.go:94] pod "coredns-66bc5c9577-c94hz" is "Ready"
	I0916 23:57:54.409438  804231 pod_ready.go:86] duration metric: took 4.271645ms for pod "coredns-66bc5c9577-c94hz" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:54.409448  804231 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-qn8m7" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:54.413536  804231 pod_ready.go:94] pod "coredns-66bc5c9577-qn8m7" is "Ready"
	I0916 23:57:54.413553  804231 pod_ready.go:86] duration metric: took 4.095453ms for pod "coredns-66bc5c9577-qn8m7" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:54.415699  804231 pod_ready.go:83] waiting for pod "etcd-ha-472903" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:54.419599  804231 pod_ready.go:94] pod "etcd-ha-472903" is "Ready"
	I0916 23:57:54.419618  804231 pod_ready.go:86] duration metric: took 3.899664ms for pod "etcd-ha-472903" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:54.419627  804231 pod_ready.go:83] waiting for pod "etcd-ha-472903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:54.423363  804231 pod_ready.go:94] pod "etcd-ha-472903-m02" is "Ready"
	I0916 23:57:54.423380  804231 pod_ready.go:86] duration metric: took 3.746731ms for pod "etcd-ha-472903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:54.423386  804231 pod_ready.go:83] waiting for pod "etcd-ha-472903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:54.603706  804231 request.go:683] "Waited before sending request" delay="180.227617ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-472903-m03"
	I0916 23:57:54.803902  804231 request.go:683] "Waited before sending request" delay="197.349252ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903-m03"
	I0916 23:57:55.003954  804231 request.go:683] "Waited before sending request" delay="80.206914ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-472903-m03"
	I0916 23:57:55.203362  804231 request.go:683] "Waited before sending request" delay="196.197515ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903-m03"
	I0916 23:57:55.206052  804231 pod_ready.go:94] pod "etcd-ha-472903-m03" is "Ready"
	I0916 23:57:55.206075  804231 pod_ready.go:86] duration metric: took 782.683771ms for pod "etcd-ha-472903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:55.403450  804231 request.go:683] "Waited before sending request" delay="197.254129ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I0916 23:57:55.406629  804231 pod_ready.go:83] waiting for pod "kube-apiserver-ha-472903" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:55.604081  804231 request.go:683] "Waited before sending request" delay="197.327981ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-472903"
	I0916 23:57:55.803277  804231 request.go:683] "Waited before sending request" delay="196.28238ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903"
	I0916 23:57:55.806023  804231 pod_ready.go:94] pod "kube-apiserver-ha-472903" is "Ready"
	I0916 23:57:55.806053  804231 pod_ready.go:86] duration metric: took 399.400731ms for pod "kube-apiserver-ha-472903" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:55.806064  804231 pod_ready.go:83] waiting for pod "kube-apiserver-ha-472903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:56.003360  804231 request.go:683] "Waited before sending request" delay="197.181089ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-472903-m02"
	I0916 23:57:56.203591  804231 request.go:683] "Waited before sending request" delay="197.334062ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903-m02"
	I0916 23:57:56.206593  804231 pod_ready.go:94] pod "kube-apiserver-ha-472903-m02" is "Ready"
	I0916 23:57:56.206619  804231 pod_ready.go:86] duration metric: took 400.548564ms for pod "kube-apiserver-ha-472903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:56.206627  804231 pod_ready.go:83] waiting for pod "kube-apiserver-ha-472903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:56.404053  804231 request.go:683] "Waited before sending request" delay="197.330591ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-472903-m03"
	I0916 23:57:56.603366  804231 request.go:683] "Waited before sending request" delay="196.334008ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903-m03"
	I0916 23:57:56.606216  804231 pod_ready.go:94] pod "kube-apiserver-ha-472903-m03" is "Ready"
	I0916 23:57:56.606240  804231 pod_ready.go:86] duration metric: took 399.60823ms for pod "kube-apiserver-ha-472903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:56.803696  804231 request.go:683] "Waited before sending request" delay="197.341894ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I0916 23:57:56.806878  804231 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-472903" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:57.003237  804231 request.go:683] "Waited before sending request" delay="196.261492ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-472903"
	I0916 23:57:57.203189  804231 request.go:683] "Waited before sending request" delay="197.16206ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903"
	I0916 23:57:57.205847  804231 pod_ready.go:94] pod "kube-controller-manager-ha-472903" is "Ready"
	I0916 23:57:57.205870  804231 pod_ready.go:86] duration metric: took 398.97003ms for pod "kube-controller-manager-ha-472903" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:57.205878  804231 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-472903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:57.403223  804231 request.go:683] "Waited before sending request" delay="197.233762ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-472903-m02"
	I0916 23:57:57.603503  804231 request.go:683] "Waited before sending request" delay="197.308924ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903-m02"
	I0916 23:57:57.606309  804231 pod_ready.go:94] pod "kube-controller-manager-ha-472903-m02" is "Ready"
	I0916 23:57:57.606331  804231 pod_ready.go:86] duration metric: took 400.447455ms for pod "kube-controller-manager-ha-472903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:57.606339  804231 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-472903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:57.803572  804231 request.go:683] "Waited before sending request" delay="197.156861ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-472903-m03"
	I0916 23:57:58.003564  804231 request.go:683] "Waited before sending request" delay="197.308739ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903-m03"
	I0916 23:57:58.006495  804231 pod_ready.go:94] pod "kube-controller-manager-ha-472903-m03" is "Ready"
	I0916 23:57:58.006527  804231 pod_ready.go:86] duration metric: took 400.177209ms for pod "kube-controller-manager-ha-472903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:58.203971  804231 request.go:683] "Waited before sending request" delay="197.330656ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I0916 23:57:58.207087  804231 pod_ready.go:83] waiting for pod "kube-proxy-58lkb" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:58.403484  804231 request.go:683] "Waited before sending request" delay="196.298118ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-58lkb"
	I0916 23:57:58.603727  804231 request.go:683] "Waited before sending request" delay="197.238459ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903-m02"
	I0916 23:57:58.606561  804231 pod_ready.go:94] pod "kube-proxy-58lkb" is "Ready"
	I0916 23:57:58.606586  804231 pod_ready.go:86] duration metric: took 399.476011ms for pod "kube-proxy-58lkb" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:58.606593  804231 pod_ready.go:83] waiting for pod "kube-proxy-d4m8f" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:58.804003  804231 request.go:683] "Waited before sending request" delay="197.323847ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d4m8f"
	I0916 23:57:59.003937  804231 request.go:683] "Waited before sending request" delay="197.340178ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903"
	I0916 23:57:59.006899  804231 pod_ready.go:94] pod "kube-proxy-d4m8f" is "Ready"
	I0916 23:57:59.006927  804231 pod_ready.go:86] duration metric: took 400.327971ms for pod "kube-proxy-d4m8f" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:59.006938  804231 pod_ready.go:83] waiting for pod "kube-proxy-kn6nb" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:59.203366  804231 request.go:683] "Waited before sending request" delay="196.341882ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kn6nb"
	I0916 23:57:59.403608  804231 request.go:683] "Waited before sending request" delay="197.193431ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903-m03"
	I0916 23:57:59.604047  804231 request.go:683] "Waited before sending request" delay="96.244025ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kn6nb"
	I0916 23:57:59.803112  804231 request.go:683] "Waited before sending request" delay="196.282766ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903-m03"
	I0916 23:58:00.203120  804231 request.go:683] "Waited before sending request" delay="192.276334ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903-m03"
	I0916 23:58:00.603459  804231 request.go:683] "Waited before sending request" delay="93.218157ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903-m03"
	W0916 23:58:01.014543  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:03.512871  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:06.012965  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:08.512763  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:11.012966  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:13.013166  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:15.512655  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:18.012615  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:20.513188  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:23.012908  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:25.013240  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:27.512733  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:30.012142  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:32.012503  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:34.013070  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:36.512643  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	I0916 23:58:37.014670  804231 pod_ready.go:94] pod "kube-proxy-kn6nb" is "Ready"
	I0916 23:58:37.014697  804231 pod_ready.go:86] duration metric: took 38.007753603s for pod "kube-proxy-kn6nb" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:37.017732  804231 pod_ready.go:83] waiting for pod "kube-scheduler-ha-472903" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:37.022228  804231 pod_ready.go:94] pod "kube-scheduler-ha-472903" is "Ready"
	I0916 23:58:37.022246  804231 pod_ready.go:86] duration metric: took 4.488553ms for pod "kube-scheduler-ha-472903" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:37.022253  804231 pod_ready.go:83] waiting for pod "kube-scheduler-ha-472903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:37.026173  804231 pod_ready.go:94] pod "kube-scheduler-ha-472903-m02" is "Ready"
	I0916 23:58:37.026191  804231 pod_ready.go:86] duration metric: took 3.932068ms for pod "kube-scheduler-ha-472903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:37.026198  804231 pod_ready.go:83] waiting for pod "kube-scheduler-ha-472903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:37.030029  804231 pod_ready.go:94] pod "kube-scheduler-ha-472903-m03" is "Ready"
	I0916 23:58:37.030046  804231 pod_ready.go:86] duration metric: took 3.843487ms for pod "kube-scheduler-ha-472903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:37.030054  804231 pod_ready.go:40] duration metric: took 42.627839542s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
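
	[editor note] The 42.6s of extra waiting above is dominated by kube-proxy-kn6nb (38s). Roughly the same readiness gate expressed with kubectl, as an approximation of what the test polls for rather than what it literally runs:

	    kubectl -n kube-system wait pod -l k8s-app=kube-proxy --for=condition=Ready --timeout=4m
	    kubectl -n kube-system wait pod -l component=kube-scheduler --for=condition=Ready --timeout=4m
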
	I0916 23:58:37.073472  804231 start.go:617] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0916 23:58:37.074923  804231 out.go:179] * Done! kubectl is now configured to use "ha-472903" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0a41d8b587e02       8c811b4aec35f       12 minutes ago      Running             busybox                   0                   a2422ee3e6e6d       busybox-7b57f96db7-6hrm6
	f33de265effb1       6e38f40d628db       13 minutes ago      Running             storage-provisioner       1                   1c0713f862ea0       storage-provisioner
	9f103b05d2d6f       52546a367cc9e       13 minutes ago      Running             coredns                   0                   9579263342827       coredns-66bc5c9577-c94hz
	3b457407f10e3       52546a367cc9e       13 minutes ago      Running             coredns                   0                   290cfb537788e       coredns-66bc5c9577-qn8m7
	cc69d2451cb65       409467f978b4a       13 minutes ago      Running             kindnet-cni               0                   3e17d6ae9b2a6       kindnet-lh7dv
	f4767b6363ce9       6e38f40d628db       13 minutes ago      Exited              storage-provisioner       0                   1c0713f862ea0       storage-provisioner
	92dd4d116eb03       df0860106674d       13 minutes ago      Running             kube-proxy                0                   8c0ecd5301326       kube-proxy-d4m8f
	3cb75495f7a54       765655ea60781       13 minutes ago      Running             kube-vip                  0                   4c425da29992d       kube-vip-ha-472903
	bba28cace6502       46169d968e920       13 minutes ago      Running             kube-scheduler            0                   f18dd7697c60f       kube-scheduler-ha-472903
	087290a41f59c       a0af72f2ec6d6       13 minutes ago      Running             kube-controller-manager   0                   0760ebe1d2a56       kube-controller-manager-ha-472903
	0aba62132d764       90550c43ad2bc       13 minutes ago      Running             kube-apiserver            0                   8ad1fa8bc0267       kube-apiserver-ha-472903
	23c0af0bdbe95       5f1f5298c888d       13 minutes ago      Running             etcd                      0                   b01a62742caec       etcd-ha-472903
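
	(Note: the table above is a CRI-level container listing from the node's containerd runtime; a roughly equivalent view can be pulled from the node directly, assuming the crictl tooling minikube's node image ships by default:

	    out/minikube-linux-amd64 -p ha-472903 ssh -- sudo crictl ps -a
	)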
	
	
	==> containerd <==
	Sep 16 23:57:20 ha-472903 containerd[765]: time="2025-09-16T23:57:20.857383931Z" level=info msg="StartContainer for \"9f103b05d2d6fd9df1ffca0135173363251e58587aa3f9093200d96a7302d315\""
	Sep 16 23:57:20 ha-472903 containerd[765]: time="2025-09-16T23:57:20.915209442Z" level=info msg="StartContainer for \"9f103b05d2d6fd9df1ffca0135173363251e58587aa3f9093200d96a7302d315\" returns successfully"
	Sep 16 23:57:26 ha-472903 containerd[765]: time="2025-09-16T23:57:26.847849669Z" level=info msg="received exit event container_id:\"f4767b6363ce9c18f8b183cfefd42f69a8b6845fea9e30eec23d90668bc0a3f8\"  id:\"f4767b6363ce9c18f8b183cfefd42f69a8b6845fea9e30eec23d90668bc0a3f8\"  pid:2188  exit_status:1  exited_at:{seconds:1758067046  nanos:847300745}"
	Sep 16 23:57:29 ha-472903 containerd[765]: time="2025-09-16T23:57:29.084468964Z" level=info msg="shim disconnected" id=f4767b6363ce9c18f8b183cfefd42f69a8b6845fea9e30eec23d90668bc0a3f8 namespace=k8s.io
	Sep 16 23:57:29 ha-472903 containerd[765]: time="2025-09-16T23:57:29.084514637Z" level=warning msg="cleaning up after shim disconnected" id=f4767b6363ce9c18f8b183cfefd42f69a8b6845fea9e30eec23d90668bc0a3f8 namespace=k8s.io
	Sep 16 23:57:29 ha-472903 containerd[765]: time="2025-09-16T23:57:29.084528446Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 16 23:57:29 ha-472903 containerd[765]: time="2025-09-16T23:57:29.861023305Z" level=info msg="CreateContainer within sandbox \"1c0713f862ea047ef39e7ae39aea7b7769255565bbf61da2859ac341b5b32bca\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:1,}"
	Sep 16 23:57:29 ha-472903 containerd[765]: time="2025-09-16T23:57:29.875038922Z" level=info msg="CreateContainer within sandbox \"1c0713f862ea047ef39e7ae39aea7b7769255565bbf61da2859ac341b5b32bca\" for &ContainerMetadata{Name:storage-provisioner,Attempt:1,} returns container id \"f33de265effb1050318db82caef7df35706c6a78a2f601466a28e71f4048fedc\""
	Sep 16 23:57:29 ha-472903 containerd[765]: time="2025-09-16T23:57:29.875884762Z" level=info msg="StartContainer for \"f33de265effb1050318db82caef7df35706c6a78a2f601466a28e71f4048fedc\""
	Sep 16 23:57:29 ha-472903 containerd[765]: time="2025-09-16T23:57:29.929708067Z" level=info msg="StartContainer for \"f33de265effb1050318db82caef7df35706c6a78a2f601466a28e71f4048fedc\" returns successfully"
	Sep 16 23:58:40 ha-472903 containerd[765]: time="2025-09-16T23:58:40.362974621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox-7b57f96db7-6hrm6,Uid:bd03bad4-af1e-42d0-81fb-6fcaeaa8775e,Namespace:default,Attempt:0,}"
	Sep 16 23:58:40 ha-472903 containerd[765]: time="2025-09-16T23:58:40.455106923Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to create inotify fd"
	Sep 16 23:58:40 ha-472903 containerd[765]: time="2025-09-16T23:58:40.455480779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox-7b57f96db7-6hrm6,Uid:bd03bad4-af1e-42d0-81fb-6fcaeaa8775e,Namespace:default,Attempt:0,} returns sandbox id \"a2422ee3e6e6de2b23238fbdd05d962f2c25009569227e21869b285e5353e70a\""
	Sep 16 23:58:40 ha-472903 containerd[765]: time="2025-09-16T23:58:40.457290181Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28\""
	Sep 16 23:58:42 ha-472903 containerd[765]: time="2025-09-16T23:58:42.440332779Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Sep 16 23:58:42 ha-472903 containerd[765]: time="2025-09-16T23:58:42.440968214Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28: active requests=0, bytes read=727667"
	Sep 16 23:58:42 ha-472903 containerd[765]: time="2025-09-16T23:58:42.442025332Z" level=info msg="ImageCreate event name:\"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Sep 16 23:58:42 ha-472903 containerd[765]: time="2025-09-16T23:58:42.443719507Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Sep 16 23:58:42 ha-472903 containerd[765]: time="2025-09-16T23:58:42.444221405Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28\" with image id \"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\", repo tag \"gcr.io/k8s-minikube/busybox:1.28\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12\", size \"725911\" in 1.986887608s"
	Sep 16 23:58:42 ha-472903 containerd[765]: time="2025-09-16T23:58:42.444254598Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28\" returns image reference \"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\""
	Sep 16 23:58:42 ha-472903 containerd[765]: time="2025-09-16T23:58:42.447875079Z" level=info msg="CreateContainer within sandbox \"a2422ee3e6e6de2b23238fbdd05d962f2c25009569227e21869b285e5353e70a\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Sep 16 23:58:42 ha-472903 containerd[765]: time="2025-09-16T23:58:42.457018566Z" level=info msg="CreateContainer within sandbox \"a2422ee3e6e6de2b23238fbdd05d962f2c25009569227e21869b285e5353e70a\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"0a41d8b587e021b881cec481dd5e13b98d695cdba1671a8eb501413b121637d8\""
	Sep 16 23:58:42 ha-472903 containerd[765]: time="2025-09-16T23:58:42.457508138Z" level=info msg="StartContainer for \"0a41d8b587e021b881cec481dd5e13b98d695cdba1671a8eb501413b121637d8\""
	Sep 16 23:58:42 ha-472903 containerd[765]: time="2025-09-16T23:58:42.510633374Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to create inotify fd"
	Sep 16 23:58:42 ha-472903 containerd[765]: time="2025-09-16T23:58:42.512731136Z" level=info msg="StartContainer for \"0a41d8b587e021b881cec481dd5e13b98d695cdba1671a8eb501413b121637d8\" returns successfully"
	
	
	==> coredns [3b457407f10e357ce33da7fa3fb4333f8312f0d3e3570cf8528cdcac8f5a1d0f] <==
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47028 - 14410 "HINFO IN 8622750158419892651.814616782938826920. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.021349234s
	[INFO] 10.244.1.2:52581 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000299614s
	[INFO] 10.244.1.2:57899 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.012540337s
	[INFO] 10.244.1.2:54323 - 5 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,rd,ra 124 0.008980197s
	[INFO] 10.244.1.2:53799 - 6 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,rd,ra 126 0.009949044s
	[INFO] 10.244.0.4:39485 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157098s
	[INFO] 10.244.0.4:57871 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 89 0.000750185s
	[INFO] 10.244.0.4:53410 - 5 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,aa,rd,ra 126 0.000089028s
	[INFO] 10.244.1.2:37733 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150317s
	[INFO] 10.244.1.2:59346 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.028128363s
	[INFO] 10.244.1.2:43091 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.01004668s
	[INFO] 10.244.1.2:37227 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000191819s
	[INFO] 10.244.1.2:40079 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000125376s
	[INFO] 10.244.0.4:38168 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000181114s
	[INFO] 10.244.0.4:60067 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000087147s
	[INFO] 10.244.0.4:47611 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000122939s
	[INFO] 10.244.0.4:37626 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000121195s
	[INFO] 10.244.1.2:42817 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159509s
	[INFO] 10.244.1.2:33910 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000186538s
	[INFO] 10.244.1.2:37929 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000109836s
	[INFO] 10.244.0.4:50698 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000212263s
	[INFO] 10.244.0.4:33166 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000100167s
	
	
	==> coredns [9f103b05d2d6fd9df1ffca0135173363251e58587aa3f9093200d96a7302d315] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45239 - 14115 "HINFO IN 5883645869461503498.3950535614037284853. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.058516241s
	[INFO] 10.244.1.2:55352 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 89 0.003252862s
	[INFO] 10.244.0.4:33650 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.001640931s
	[INFO] 10.244.0.4:50077 - 6 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,rd,ra 124 0.000621363s
	[INFO] 10.244.1.2:48439 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000189187s
	[INFO] 10.244.1.2:39582 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000151327s
	[INFO] 10.244.1.2:59539 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000140715s
	[INFO] 10.244.0.4:42999 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000177514s
	[INFO] 10.244.0.4:36769 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.010694753s
	[INFO] 10.244.0.4:53074 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000158932s
	[INFO] 10.244.0.4:57223 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00012213s
	[INFO] 10.244.1.2:50810 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000176678s
	[INFO] 10.244.0.4:58045 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142445s
	[INFO] 10.244.0.4:39777 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000123555s
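
	(Note: the two CoreDNS logs above come from the coredns-66bc5c9577-* pods in kube-system; the same output can be refetched per pod, for example:

	    out/minikube-linux-amd64 -p ha-472903 kubectl -- -n kube-system logs coredns-66bc5c9577-c94hz
	)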
	
	
	==> describe nodes <==
	Name:               ha-472903
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-472903
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=ha-472903
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_16T23_56_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Sep 2025 23:56:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-472903
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:10:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:08:42 +0000   Tue, 16 Sep 2025 23:56:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:08:42 +0000   Tue, 16 Sep 2025 23:56:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:08:42 +0000   Tue, 16 Sep 2025 23:56:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:08:42 +0000   Tue, 16 Sep 2025 23:56:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-472903
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 ac22e2ab5b0349cdb9474983aa23278e
	  System UUID:                695af4c7-28fb-4299-9454-75db3262ca2c
	  Boot ID:                    4acfd7d3-9698-436f-b4ae-efdf6bd483d5
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-6hrm6             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-66bc5c9577-c94hz             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     13m
	  kube-system                 coredns-66bc5c9577-qn8m7             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     13m
	  kube-system                 etcd-ha-472903                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         13m
	  kube-system                 kindnet-lh7dv                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-472903             250m (3%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-472903    200m (2%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-d4m8f                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-472903             100m (1%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-472903                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node ha-472903 status is now: NodeHasSufficientPID
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node ha-472903 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node ha-472903 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m                kubelet          Node ha-472903 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                kubelet          Node ha-472903 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m                kubelet          Node ha-472903 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m                node-controller  Node ha-472903 event: Registered Node ha-472903 in Controller
	  Normal  RegisteredNode           13m                node-controller  Node ha-472903 event: Registered Node ha-472903 in Controller
	  Normal  RegisteredNode           12m                node-controller  Node ha-472903 event: Registered Node ha-472903 in Controller
	
	
	Name:               ha-472903-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-472903-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=ha-472903
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_16T23_57_23_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Sep 2025 23:57:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-472903-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:10:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:07:45 +0000   Tue, 16 Sep 2025 23:57:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:07:45 +0000   Tue, 16 Sep 2025 23:57:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:07:45 +0000   Tue, 16 Sep 2025 23:57:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:07:45 +0000   Tue, 16 Sep 2025 23:57:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-472903-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 4094672df3d84509ae4c88c54f7f5e93
	  System UUID:                85df9db8-f21a-4038-9f8c-4cc1d81dc0d5
	  Boot ID:                    4acfd7d3-9698-436f-b4ae-efdf6bd483d5
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-4jfjt                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-ha-472903-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         13m
	  kube-system                 kindnet-q7c7s                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-472903-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-472903-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-58lkb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-472903-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-472903-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  Starting        13m   kube-proxy       
	  Normal  RegisteredNode  13m   node-controller  Node ha-472903-m02 event: Registered Node ha-472903-m02 in Controller
	  Normal  RegisteredNode  13m   node-controller  Node ha-472903-m02 event: Registered Node ha-472903-m02 in Controller
	  Normal  RegisteredNode  12m   node-controller  Node ha-472903-m02 event: Registered Node ha-472903-m02 in Controller
	
	
	Name:               ha-472903-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-472903-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=ha-472903
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_16T23_57_52_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Sep 2025 23:57:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-472903-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:10:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:09:45 +0000   Tue, 16 Sep 2025 23:57:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:09:45 +0000   Tue, 16 Sep 2025 23:57:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:09:45 +0000   Tue, 16 Sep 2025 23:57:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:09:45 +0000   Tue, 16 Sep 2025 23:57:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-472903-m03
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 9964c713c65f4333be8a877aab744040
	  System UUID:                7eb7f2ee-a32d-4876-a4ad-58f745b9c377
	  Boot ID:                    4acfd7d3-9698-436f-b4ae-efdf6bd483d5
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-mknzs                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-ha-472903-m03                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-x6twd                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-ha-472903-m03             250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-472903-m03    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-kn6nb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-472903-m03             100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-472903-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  12m   node-controller  Node ha-472903-m03 event: Registered Node ha-472903-m03 in Controller
	  Normal  RegisteredNode  12m   node-controller  Node ha-472903-m03 event: Registered Node ha-472903-m03 in Controller
	  Normal  RegisteredNode  12m   node-controller  Node ha-472903-m03 event: Registered Node ha-472903-m03 in Controller
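
	(Note: the three node descriptions above are standard kubectl describe output and can be regenerated against this profile with:

	    out/minikube-linux-amd64 -p ha-472903 kubectl -- describe nodes
	)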
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 86 e8 75 4b 01 57 08 06
	[  +0.025562] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff be 3d a9 85 b1 bd 08 06
	[ +13.150028] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff d6 5c f0 26 cd ba 08 06
	[  +0.000341] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a 20 90 fb f5 d8 08 06
	[ +28.639349] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 06 26 63 8d db 90 08 06
	[  +0.000417] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff be 3d a9 85 b1 bd 08 06
	[  +0.836892] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 56 cc 9b 52 38 94 08 06
	[  +0.080327] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3e 79 8e c8 7e 37 08 06
	[Sep16 23:40] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 66 3b 76 df aa 6a 08 06
	[ +20.325550] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff de 39 4b 41 df 63 08 06
	[  +0.000318] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 66 3b 76 df aa 6a 08 06
	[  +8.925776] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3e cd c1 f7 dc c8 08 06
	[  +0.000373] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 3e 79 8e c8 7e 37 08 06
	
	
	==> etcd [23c0af0bdbe9526d53769461ed9f80d8c743b02e625b65cce39c888f5e7d4b4e] <==
	{"level":"info","ts":"2025-09-16T23:57:38.284368Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"ab9d0391dce79465"}
	{"level":"info","ts":"2025-09-16T23:57:38.288146Z","caller":"etcdserver/server.go:1838","msg":"sending merged snapshot","from":"aec36adc501070cc","to":"ab9d0391dce79465","bytes":1356737,"size":"1.4 MB"}
	{"level":"info","ts":"2025-09-16T23:57:38.288252Z","caller":"rafthttp/snapshot_sender.go:82","msg":"sending database snapshot","snapshot-index":679,"remote-peer-id":"ab9d0391dce79465","bytes":1356737,"size":"1.4 MB"}
	{"level":"info","ts":"2025-09-16T23:57:38.293060Z","caller":"etcdserver/snapshot_merge.go:64","msg":"sent database snapshot to writer","bytes":1347584,"size":"1.3 MB"}
	{"level":"info","ts":"2025-09-16T23:57:38.299128Z","caller":"rafthttp/snapshot_sender.go:131","msg":"sent database snapshot","snapshot-index":679,"remote-peer-id":"ab9d0391dce79465","bytes":1356737,"size":"1.4 MB"}
	{"level":"info","ts":"2025-09-16T23:57:38.314973Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"ab9d0391dce79465"}
	{"level":"info","ts":"2025-09-16T23:57:38.321619Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"ab9d0391dce79465","stream-type":"stream Message"}
	{"level":"warn","ts":"2025-09-16T23:57:38.321647Z","caller":"rafthttp/stream.go:264","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"ab9d0391dce79465"}
	{"level":"info","ts":"2025-09-16T23:57:38.321659Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"ab9d0391dce79465"}
	{"level":"info","ts":"2025-09-16T23:57:38.321995Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"ab9d0391dce79465"}
	{"level":"info","ts":"2025-09-16T23:57:38.324746Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"ab9d0391dce79465","stream-type":"stream MsgApp v2"}
	{"level":"warn","ts":"2025-09-16T23:57:38.324782Z","caller":"rafthttp/stream.go:264","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"ab9d0391dce79465"}
	{"level":"info","ts":"2025-09-16T23:57:38.324796Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"ab9d0391dce79465"}
	{"level":"warn","ts":"2025-09-16T23:57:38.539376Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:45372","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-16T23:57:38.542781Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aec36adc501070cc switched to configuration voters=(4226730353838347643 12366044076840555621 12593026477526642892)"}
	{"level":"info","ts":"2025-09-16T23:57:38.542928Z","caller":"membership/cluster.go:550","msg":"promote member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","promoted-member-id":"ab9d0391dce79465"}
	{"level":"info","ts":"2025-09-16T23:57:38.542988Z","caller":"etcdserver/server.go:1752","msg":"applied a configuration change through raft","local-member-id":"aec36adc501070cc","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"ab9d0391dce79465"}
	{"level":"info","ts":"2025-09-16T23:57:40.311787Z","caller":"etcdserver/server.go:1856","msg":"sent merged snapshot","from":"aec36adc501070cc","to":"3aa85cdcd5e5557b","bytes":876533,"size":"876 kB","took":"30.009467109s"}
	{"level":"info","ts":"2025-09-16T23:57:47.400606Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-16T23:57:51.874557Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-16T23:58:06.103123Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-16T23:58:08.299219Z","caller":"etcdserver/server.go:1856","msg":"sent merged snapshot","from":"aec36adc501070cc","to":"ab9d0391dce79465","bytes":1356737,"size":"1.4 MB","took":"30.011071692s"}
	{"level":"info","ts":"2025-09-17T00:06:46.502551Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1554}
	{"level":"info","ts":"2025-09-17T00:06:46.523688Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1554,"took":"20.616779ms","hash":4277915431,"current-db-size-bytes":3936256,"current-db-size":"3.9 MB","current-db-size-in-use-bytes":2105344,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2025-09-17T00:06:46.523839Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":4277915431,"revision":1554,"compact-revision":-1}
	
	
	==> kernel <==
	 00:10:44 up  2:53,  0 users,  load average: 0.15, 0.34, 0.80
	Linux ha-472903 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [cc69d2451cb65860b5bc78e027be2fc1cb0f9fa6542b4abe3bc1ff1c90a8fe60] <==
	I0917 00:09:57.503751       1 main.go:324] Node ha-472903-m03 has CIDR [10.244.2.0/24] 
	I0917 00:10:07.510274       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:10:07.510320       1 main.go:301] handling current node
	I0917 00:10:07.510336       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:10:07.510341       1 main.go:324] Node ha-472903-m02 has CIDR [10.244.1.0/24] 
	I0917 00:10:07.510554       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:10:07.510567       1 main.go:324] Node ha-472903-m03 has CIDR [10.244.2.0/24] 
	I0917 00:10:17.512521       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:10:17.512563       1 main.go:301] handling current node
	I0917 00:10:17.512582       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:10:17.512589       1 main.go:324] Node ha-472903-m02 has CIDR [10.244.1.0/24] 
	I0917 00:10:17.512785       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:10:17.512800       1 main.go:324] Node ha-472903-m03 has CIDR [10.244.2.0/24] 
	I0917 00:10:27.511383       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:10:27.511448       1 main.go:301] handling current node
	I0917 00:10:27.511469       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:10:27.511476       1 main.go:324] Node ha-472903-m02 has CIDR [10.244.1.0/24] 
	I0917 00:10:27.511660       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:10:27.511671       1 main.go:324] Node ha-472903-m03 has CIDR [10.244.2.0/24] 
	I0917 00:10:37.506147       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:10:37.506186       1 main.go:301] handling current node
	I0917 00:10:37.506204       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:10:37.506209       1 main.go:324] Node ha-472903-m02 has CIDR [10.244.1.0/24] 
	I0917 00:10:37.506448       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:10:37.506459       1 main.go:324] Node ha-472903-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [0aba62132d764965d8e1a80a4a6345bb7e34892b23143da4a7af3450cd465d6c] <==
	I0917 00:02:08.464547       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:02:14.110452       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:03:20.793210       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:03:22.342952       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:04:24.690127       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:04:42.485311       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:05:30.551003       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:06:06.800617       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:06:32.710262       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:06:47.441344       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0917 00:07:34.732036       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:07:42.022448       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:08:46.236959       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:08:51.159386       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:09:52.603432       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:09:53.014406       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0917 00:10:41.954540       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37534: use of closed network connection
	E0917 00:10:42.122977       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37556: use of closed network connection
	E0917 00:10:42.250606       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37572: use of closed network connection
	E0917 00:10:42.442469       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37584: use of closed network connection
	E0917 00:10:42.605380       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37602: use of closed network connection
	E0917 00:10:42.730284       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37612: use of closed network connection
	E0917 00:10:42.884291       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37626: use of closed network connection
	E0917 00:10:43.036952       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37644: use of closed network connection
	E0917 00:10:43.161098       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37658: use of closed network connection
	
	
	==> kube-controller-manager [087290a41f59caa4f9bc89759bcec6cf90f47c8a2ab83b7c671a8fff35641df9] <==
	I0916 23:56:54.728442       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0916 23:56:54.728466       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0916 23:56:54.728485       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0916 23:56:54.728644       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0916 23:56:54.728665       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0916 23:56:54.728648       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0916 23:56:54.728914       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0916 23:56:54.730175       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0916 23:56:54.730201       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0916 23:56:54.732432       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0916 23:56:54.733452       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0916 23:56:54.735655       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0916 23:56:54.735714       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0916 23:56:54.735760       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0916 23:56:54.735767       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0916 23:56:54.735772       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0916 23:56:54.740680       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-472903" podCIDRs=["10.244.0.0/24"]
	I0916 23:56:54.749950       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0916 23:57:22.933124       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-472903-m02\" does not exist"
	I0916 23:57:22.943785       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-472903-m02" podCIDRs=["10.244.1.0/24"]
	I0916 23:57:24.681339       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-472903-m02"
	I0916 23:57:51.749676       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-472903-m03\" does not exist"
	I0916 23:57:51.772476       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-472903-m03" podCIDRs=["10.244.2.0/24"]
	E0916 23:57:51.829801       1 daemon_controller.go:346] "Unhandled Error" err="kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kube-proxy\", GenerateName:\"\", Namespace:\"kube-system\", SelfLink:\"\", UID:\"3f5da9fc-6769-4ca8-a715-edeace44c646\", ResourceVersion:\"594\", Generation:1, CreationTimestamp:time.Date(2025, time.September, 16, 23, 56, 49, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"k8s-app\":\"kube-proxy\"}, Annotations:map[string]string{\"deprecated.daemonset.template.generation\":\"1\"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc00222d0e0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:\"\", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"
\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"k8s-app\":\"kube-proxy\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:\"kube-proxy\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSourc
e)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc0021ed7c0), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"xtables-lock\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc002fcdce0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolu
meSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIV
olumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"lib-modules\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc002fcdcf8), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtu
alDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:\"kube-proxy\", Image:\"registry.k8s.io/kube-proxy:v1.34.0\", Command:[]string{\"/usr/local/bin/kube-proxy\", \"--config=/var/lib/kube-proxy/config.conf\", \"--hostname-override=$(NODE_NAME)\"}, Args:[]string(nil), WorkingDir:\"\", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:\"NODE_NAME\", Value:\"\", ValueFrom:(*v1.EnvVarSource)(0xc00144a7e0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.Re
sourceList(nil), Claims:[]v1.ResourceClaim(nil)}, ResizePolicy:[]v1.ContainerResizePolicy(nil), RestartPolicy:(*v1.ContainerRestartPolicy)(nil), RestartPolicyRules:[]v1.ContainerRestartRule(nil), VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:\"kube-proxy\", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/var/lib/kube-proxy\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"xtables-lock\", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/run/xtables.lock\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"lib-modules\", ReadOnly:true, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/lib/modules\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Life
cycle:(*v1.Lifecycle)(nil), TerminationMessagePath:\"/dev/termination-log\", TerminationMessagePolicy:\"File\", ImagePullPolicy:\"IfNotPresent\", SecurityContext:(*v1.SecurityContext)(0xc0019549c0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:\"Always\", TerminationGracePeriodSeconds:(*int64)(0xc001900b18), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:\"ClusterFirst\", NodeSelector:map[string]string{\"kubernetes.io/os\":\"linux\"}, ServiceAccountName:\"kube-proxy\", DeprecatedServiceAccount:\"kube-proxy\", AutomountServiceAccountToken:(*bool)(nil), NodeName:\"\", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001ba1200), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:\"\", Subdomain:\"\", Affinity:(*v1.Affinity)(nil), SchedulerName:\"default-scheduler\", Tolerations:[]v1.Toleration{v1.Toleration{Key:\"\", Operator:\"Exists\", Value:\"\", Effect:\"\", Tole
rationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:\"system-node-critical\", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil), SchedulingGates:[]v1.PodSchedulingGate(nil), ResourceClaims:[]v1.PodResourceClaim(nil), Resources:(*v1.ResourceRequirements)(nil), HostnameOverride:(*string)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:\"RollingUpdate\", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc001e14570)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001900b70)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:2, NumberMisscheduled:0, DesiredNumberScheduled:2, NumberReady:2, ObservedGeneration:1, UpdatedNumberScheduled:2, NumberAvailab
le:2, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps \"kube-proxy\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0916 23:57:54.685322       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-472903-m03"
	
	
	==> kube-proxy [92dd4d116eb0387dded82fb32d35690ec2d00e3f5e7ac81bf7aea0c6814edd5e] <==
	I0916 23:56:56.831012       1 server_linux.go:53] "Using iptables proxy"
	I0916 23:56:56.891635       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0916 23:56:56.991820       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0916 23:56:56.991862       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0916 23:56:56.991952       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 23:56:57.015955       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 23:56:57.016001       1 server_linux.go:132] "Using iptables Proxier"
	I0916 23:56:57.021120       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 23:56:57.021457       1 server.go:527] "Version info" version="v1.34.0"
	I0916 23:56:57.021499       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 23:56:57.024872       1 config.go:200] "Starting service config controller"
	I0916 23:56:57.024892       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0916 23:56:57.024900       1 config.go:106] "Starting endpoint slice config controller"
	I0916 23:56:57.024909       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0916 23:56:57.024890       1 config.go:403] "Starting serviceCIDR config controller"
	I0916 23:56:57.024917       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0916 23:56:57.024937       1 config.go:309] "Starting node config controller"
	I0916 23:56:57.024942       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0916 23:56:57.125608       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0916 23:56:57.125691       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0916 23:56:57.125856       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0916 23:56:57.125902       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [bba28cace6502de93aa43db4fb51671581c5074990dea721d98d36d839734a67] <==
	E0916 23:56:48.619869       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0916 23:56:48.649766       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0916 23:56:48.673092       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	I0916 23:56:49.170967       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0916 23:57:51.780040       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-x6twd\": pod kindnet-x6twd is already assigned to node \"ha-472903-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-x6twd" node="ha-472903-m03"
	E0916 23:57:51.780142       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod f2346479-1adb-4bc7-af07-971525be2b05(kube-system/kindnet-x6twd) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-x6twd"
	E0916 23:57:51.780183       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-x6twd\": pod kindnet-x6twd is already assigned to node \"ha-472903-m03\"" logger="UnhandledError" pod="kube-system/kindnet-x6twd"
	I0916 23:57:51.782132       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-x6twd" node="ha-472903-m03"
	E0916 23:58:37.948695       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-x6xc9\": pod busybox-7b57f96db7-x6xc9 is already assigned to node \"ha-472903-m02\"" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-x6xc9" node="ha-472903-m02"
	E0916 23:58:37.948846       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 565a634f-ab41-4776-ba5d-63a601bfec48(default/busybox-7b57f96db7-x6xc9) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="default/busybox-7b57f96db7-x6xc9"
	E0916 23:58:37.948875       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-x6xc9\": pod busybox-7b57f96db7-x6xc9 is already assigned to node \"ha-472903-m02\"" logger="UnhandledError" pod="default/busybox-7b57f96db7-x6xc9"
	I0916 23:58:37.950251       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7b57f96db7-x6xc9" node="ha-472903-m02"
	I0916 23:58:37.966099       1 cache.go:512] "Pod was added to a different node than it was assumed" podKey="47b06c15-c007-4c50-a248-5411a0f4b6a7" pod="default/busybox-7b57f96db7-4jfjt" assumedNode="ha-472903-m02" currentNode="ha-472903"
	E0916 23:58:37.968241       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-4jfjt\": pod busybox-7b57f96db7-4jfjt is already assigned to node \"ha-472903-m02\"" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-4jfjt" node="ha-472903"
	E0916 23:58:37.968351       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 47b06c15-c007-4c50-a248-5411a0f4b6a7(default/busybox-7b57f96db7-4jfjt) was assumed on ha-472903 but assigned to ha-472903-m02" logger="UnhandledError" pod="default/busybox-7b57f96db7-4jfjt"
	E0916 23:58:37.968376       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-4jfjt\": pod busybox-7b57f96db7-4jfjt is already assigned to node \"ha-472903-m02\"" logger="UnhandledError" pod="default/busybox-7b57f96db7-4jfjt"
	I0916 23:58:37.969472       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7b57f96db7-4jfjt" node="ha-472903-m02"
	E0916 23:58:38.002469       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-wp95z\": pod busybox-7b57f96db7-wp95z is being deleted, cannot be assigned to a host" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-wp95z" node="ha-472903"
	E0916 23:58:38.002779       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-wp95z\": pod busybox-7b57f96db7-wp95z is being deleted, cannot be assigned to a host" logger="UnhandledError" pod="default/busybox-7b57f96db7-wp95z"
	E0916 23:58:38.046394       1 pod_status_patch.go:111] "Failed to patch pod status" err="pods \"busybox-7b57f96db7-xnrsc\" not found" pod="default/busybox-7b57f96db7-xnrsc"
	E0916 23:58:38.046880       1 pod_status_patch.go:111] "Failed to patch pod status" err="pods \"busybox-7b57f96db7-wp95z\" not found" pod="default/busybox-7b57f96db7-wp95z"
	E0916 23:58:40.050124       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-6hrm6\": pod busybox-7b57f96db7-6hrm6 is already assigned to node \"ha-472903\"" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-6hrm6" node="ha-472903"
	E0916 23:58:40.050213       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod bd03bad4-af1e-42d0-81fb-6fcaeaa8775e(default/busybox-7b57f96db7-6hrm6) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="default/busybox-7b57f96db7-6hrm6"
	E0916 23:58:40.050248       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-6hrm6\": pod busybox-7b57f96db7-6hrm6 is already assigned to node \"ha-472903\"" logger="UnhandledError" pod="default/busybox-7b57f96db7-6hrm6"
	I0916 23:58:40.051853       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7b57f96db7-6hrm6" node="ha-472903"
	
	
	==> kubelet <==
	Sep 16 23:58:38 ha-472903 kubelet[1676]: E0916 23:58:38.235025    1676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cc7a8d10-408f-4655-ac70-54b4af22d9eb-kube-api-access-hrb62 podName:cc7a8d10-408f-4655-ac70-54b4af22d9eb nodeName:}" failed. No retries permitted until 2025-09-16 23:58:38.735007966 +0000 UTC m=+109.066439678 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-hrb62" (UniqueName: "kubernetes.io/projected/cc7a8d10-408f-4655-ac70-54b4af22d9eb-kube-api-access-hrb62") pod "busybox-7b57f96db7-5pwbb" (UID: "cc7a8d10-408f-4655-ac70-54b4af22d9eb") : failed to fetch token: pod "busybox-7b57f96db7-5pwbb" not found
	Sep 16 23:58:38 ha-472903 kubelet[1676]: E0916 23:58:38.737179    1676 projected.go:196] Error preparing data for projected volume kube-api-access-xrpwc for pod default/busybox-7b57f96db7-xj7ks: failed to fetch token: pod "busybox-7b57f96db7-xj7ks" not found
	Sep 16 23:58:38 ha-472903 kubelet[1676]: E0916 23:58:38.737266    1676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cac915f6-7630-4320-b6d2-fd18f3c19a17-kube-api-access-xrpwc podName:cac915f6-7630-4320-b6d2-fd18f3c19a17 nodeName:}" failed. No retries permitted until 2025-09-16 23:58:39.737245356 +0000 UTC m=+110.068677057 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-xrpwc" (UniqueName: "kubernetes.io/projected/cac915f6-7630-4320-b6d2-fd18f3c19a17-kube-api-access-xrpwc") pod "busybox-7b57f96db7-xj7ks" (UID: "cac915f6-7630-4320-b6d2-fd18f3c19a17") : failed to fetch token: pod "busybox-7b57f96db7-xj7ks" not found
	Sep 16 23:58:38 ha-472903 kubelet[1676]: E0916 23:58:38.737179    1676 projected.go:196] Error preparing data for projected volume kube-api-access-hrb62 for pod default/busybox-7b57f96db7-5pwbb: failed to fetch token: pod "busybox-7b57f96db7-5pwbb" not found
	Sep 16 23:58:38 ha-472903 kubelet[1676]: E0916 23:58:38.737371    1676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cc7a8d10-408f-4655-ac70-54b4af22d9eb-kube-api-access-hrb62 podName:cc7a8d10-408f-4655-ac70-54b4af22d9eb nodeName:}" failed. No retries permitted until 2025-09-16 23:58:39.737351933 +0000 UTC m=+110.068783647 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-hrb62" (UniqueName: "kubernetes.io/projected/cc7a8d10-408f-4655-ac70-54b4af22d9eb-kube-api-access-hrb62") pod "busybox-7b57f96db7-5pwbb" (UID: "cc7a8d10-408f-4655-ac70-54b4af22d9eb") : failed to fetch token: pod "busybox-7b57f96db7-5pwbb" not found
	Sep 16 23:58:39 ha-472903 kubelet[1676]: E0916 23:58:39.027158    1676 status_manager.go:1018] "Failed to get status for pod" err="pods \"busybox-7b57f96db7-5pwbb\" is forbidden: User \"system:node:ha-472903\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'ha-472903' and this object" podUID="cc7a8d10-408f-4655-ac70-54b4af22d9eb" pod="default/busybox-7b57f96db7-5pwbb"
	Sep 16 23:58:39 ha-472903 kubelet[1676]: E0916 23:58:39.028111    1676 status_manager.go:1018] "Failed to get status for pod" err="pods \"busybox-7b57f96db7-xj7ks\" is forbidden: User \"system:node:ha-472903\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'ha-472903' and this object" podUID="cac915f6-7630-4320-b6d2-fd18f3c19a17" pod="default/busybox-7b57f96db7-xj7ks"
	Sep 16 23:58:39 ha-472903 kubelet[1676]: E0916 23:58:39.039445    1676 status_manager.go:1018] "Failed to get status for pod" err="pods \"busybox-7b57f96db7-xj7ks\" is forbidden: User \"system:node:ha-472903\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'ha-472903' and this object" podUID="cac915f6-7630-4320-b6d2-fd18f3c19a17" pod="default/busybox-7b57f96db7-xj7ks"
	Sep 16 23:58:39 ha-472903 kubelet[1676]: E0916 23:58:39.042381    1676 status_manager.go:1018] "Failed to get status for pod" err="pods \"busybox-7b57f96db7-5pwbb\" is forbidden: User \"system:node:ha-472903\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'ha-472903' and this object" podUID="cc7a8d10-408f-4655-ac70-54b4af22d9eb" pod="default/busybox-7b57f96db7-5pwbb"
	Sep 16 23:58:39 ha-472903 kubelet[1676]: I0916 23:58:39.138755    1676 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9njqf\" (UniqueName: \"kubernetes.io/projected/59b9a23c-498d-4802-9790-70931c4a2c06-kube-api-access-9njqf\") pod \"59b9a23c-498d-4802-9790-70931c4a2c06\" (UID: \"59b9a23c-498d-4802-9790-70931c4a2c06\") "
	Sep 16 23:58:39 ha-472903 kubelet[1676]: I0916 23:58:39.138821    1676 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hrb62\" (UniqueName: \"kubernetes.io/projected/cc7a8d10-408f-4655-ac70-54b4af22d9eb-kube-api-access-hrb62\") on node \"ha-472903\" DevicePath \"\""
	Sep 16 23:58:39 ha-472903 kubelet[1676]: I0916 23:58:39.138836    1676 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xrpwc\" (UniqueName: \"kubernetes.io/projected/cac915f6-7630-4320-b6d2-fd18f3c19a17-kube-api-access-xrpwc\") on node \"ha-472903\" DevicePath \"\""
	Sep 16 23:58:39 ha-472903 kubelet[1676]: I0916 23:58:39.140952    1676 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59b9a23c-498d-4802-9790-70931c4a2c06-kube-api-access-9njqf" (OuterVolumeSpecName: "kube-api-access-9njqf") pod "59b9a23c-498d-4802-9790-70931c4a2c06" (UID: "59b9a23c-498d-4802-9790-70931c4a2c06"). InnerVolumeSpecName "kube-api-access-9njqf". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Sep 16 23:58:39 ha-472903 kubelet[1676]: I0916 23:58:39.239025    1676 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9njqf\" (UniqueName: \"kubernetes.io/projected/59b9a23c-498d-4802-9790-70931c4a2c06-kube-api-access-9njqf\") on node \"ha-472903\" DevicePath \"\""
	Sep 16 23:58:39 ha-472903 kubelet[1676]: E0916 23:58:39.752137    1676 status_manager.go:1018] "Failed to get status for pod" err="pods \"busybox-7b57f96db7-5pwbb\" is forbidden: User \"system:node:ha-472903\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'ha-472903' and this object" podUID="cc7a8d10-408f-4655-ac70-54b4af22d9eb" pod="default/busybox-7b57f96db7-5pwbb"
	Sep 16 23:58:39 ha-472903 kubelet[1676]: E0916 23:58:39.753199    1676 status_manager.go:1018] "Failed to get status for pod" err="pods \"busybox-7b57f96db7-xj7ks\" is forbidden: User \"system:node:ha-472903\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'ha-472903' and this object" podUID="cac915f6-7630-4320-b6d2-fd18f3c19a17" pod="default/busybox-7b57f96db7-xj7ks"
	Sep 16 23:58:39 ha-472903 kubelet[1676]: I0916 23:58:39.754268    1676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cac915f6-7630-4320-b6d2-fd18f3c19a17" path="/var/lib/kubelet/pods/cac915f6-7630-4320-b6d2-fd18f3c19a17/volumes"
	Sep 16 23:58:39 ha-472903 kubelet[1676]: I0916 23:58:39.754475    1676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc7a8d10-408f-4655-ac70-54b4af22d9eb" path="/var/lib/kubelet/pods/cc7a8d10-408f-4655-ac70-54b4af22d9eb/volumes"
	Sep 16 23:58:40 ha-472903 kubelet[1676]: E0916 23:58:40.056772    1676 status_manager.go:1018] "Failed to get status for pod" err="pods \"busybox-7b57f96db7-5pwbb\" is forbidden: User \"system:node:ha-472903\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'ha-472903' and this object" podUID="cc7a8d10-408f-4655-ac70-54b4af22d9eb" pod="default/busybox-7b57f96db7-5pwbb"
	Sep 16 23:58:40 ha-472903 kubelet[1676]: E0916 23:58:40.057611    1676 status_manager.go:1018] "Failed to get status for pod" err="pods \"busybox-7b57f96db7-xj7ks\" is forbidden: User \"system:node:ha-472903\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'ha-472903' and this object" podUID="cac915f6-7630-4320-b6d2-fd18f3c19a17" pod="default/busybox-7b57f96db7-xj7ks"
	Sep 16 23:58:40 ha-472903 kubelet[1676]: E0916 23:58:40.059208    1676 status_manager.go:1018] "Failed to get status for pod" err="pods \"busybox-7b57f96db7-5pwbb\" is forbidden: User \"system:node:ha-472903\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'ha-472903' and this object" podUID="cc7a8d10-408f-4655-ac70-54b4af22d9eb" pod="default/busybox-7b57f96db7-5pwbb"
	Sep 16 23:58:40 ha-472903 kubelet[1676]: E0916 23:58:40.060512    1676 status_manager.go:1018] "Failed to get status for pod" err="pods \"busybox-7b57f96db7-xj7ks\" is forbidden: User \"system:node:ha-472903\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'ha-472903' and this object" podUID="cac915f6-7630-4320-b6d2-fd18f3c19a17" pod="default/busybox-7b57f96db7-xj7ks"
	Sep 16 23:58:40 ha-472903 kubelet[1676]: I0916 23:58:40.145054    1676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjkrp\" (UniqueName: \"kubernetes.io/projected/bd03bad4-af1e-42d0-81fb-6fcaeaa8775e-kube-api-access-pjkrp\") pod \"busybox-7b57f96db7-6hrm6\" (UID: \"bd03bad4-af1e-42d0-81fb-6fcaeaa8775e\") " pod="default/busybox-7b57f96db7-6hrm6"
	Sep 16 23:58:41 ha-472903 kubelet[1676]: I0916 23:58:41.754549    1676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59b9a23c-498d-4802-9790-70931c4a2c06" path="/var/lib/kubelet/pods/59b9a23c-498d-4802-9790-70931c4a2c06/volumes"
	Sep 16 23:58:43 ha-472903 kubelet[1676]: I0916 23:58:43.049200    1676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-7b57f96db7-6hrm6" podStartSLOduration=3.061025393 podStartE2EDuration="5.049179166s" podCreationTimestamp="2025-09-16 23:58:38 +0000 UTC" firstStartedPulling="2025-09-16 23:58:40.45690156 +0000 UTC m=+110.788333264" lastFinishedPulling="2025-09-16 23:58:42.445055322 +0000 UTC m=+112.776487037" observedRunningTime="2025-09-16 23:58:43.049092106 +0000 UTC m=+113.380523828" watchObservedRunningTime="2025-09-16 23:58:43.049179166 +0000 UTC m=+113.380610888"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-472903 -n ha-472903
helpers_test.go:269: (dbg) Run:  kubectl --context ha-472903 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-7b57f96db7-mknzs
helpers_test.go:282: ======> post-mortem[TestMultiControlPlane/serial/DeployApp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context ha-472903 describe pod busybox-7b57f96db7-mknzs
helpers_test.go:290: (dbg) kubectl --context ha-472903 describe pod busybox-7b57f96db7-mknzs:

                                                
                                                
-- stdout --
	Name:             busybox-7b57f96db7-mknzs
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             ha-472903-m03/192.168.49.4
	Start Time:       Tue, 16 Sep 2025 23:58:37 +0000
	Labels:           app=busybox
	                  pod-template-hash=7b57f96db7
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7b57f96db7
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gmz92 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-gmz92:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason                  Age                  From               Message
	  ----     ------                  ----                 ----               -------
	  Warning  FailedScheduling        12m                  default-scheduler  running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "busybox-7b57f96db7-mknzs": pod busybox-7b57f96db7-mknzs is already assigned to node "ha-472903-m03"
	  Warning  FailedScheduling        12m                  default-scheduler  running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "busybox-7b57f96db7-mknzs": pod busybox-7b57f96db7-mknzs is already assigned to node "ha-472903-m03"
	  Normal   Scheduled               12m                  default-scheduler  Successfully assigned default/busybox-7b57f96db7-mknzs to ha-472903-m03
	  Warning  FailedCreatePodSandBox  12m                  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "72439adc47052c2da00cee62587d780275cf6c2423dee9831567464d4725ee9d": failed to find network info for sandbox "72439adc47052c2da00cee62587d780275cf6c2423dee9831567464d4725ee9d"
	  Warning  FailedCreatePodSandBox  11m                  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "24ab8b6bd2f38653d2326c375fc81ebf17317e36885547c7b42c011bb95889ed": failed to find network info for sandbox "24ab8b6bd2f38653d2326c375fc81ebf17317e36885547c7b42c011bb95889ed"
	  Warning  FailedCreatePodSandBox  11m                  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "300fece4c100bc3e68a19e1fa6f46c8a378753727caaaeb1533dab71f234be58": failed to find network info for sandbox "300fece4c100bc3e68a19e1fa6f46c8a378753727caaaeb1533dab71f234be58"
	  Warning  FailedCreatePodSandBox  11m                  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "e49a14b4de5e24fa450a43c124b2916ad7028d35cbc3b0f74595e68ee161d1d0": failed to find network info for sandbox "e49a14b4de5e24fa450a43c124b2916ad7028d35cbc3b0f74595e68ee161d1d0"
	  Warning  FailedCreatePodSandBox  11m                  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "efa290ca498f7c70ae29d8d97709edda97bc6b062aac05a3ef6d6a83fbd42797": failed to find network info for sandbox "efa290ca498f7c70ae29d8d97709edda97bc6b062aac05a3ef6d6a83fbd42797"
	  Warning  FailedCreatePodSandBox  11m                  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "d5851ce1270b1c8994400ecd7bdabadaf895488957ffb5173dcd7e289db1de6c": failed to find network info for sandbox "d5851ce1270b1c8994400ecd7bdabadaf895488957ffb5173dcd7e289db1de6c"
	  Warning  FailedCreatePodSandBox  10m                  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "11aaa894ae434b08da8122c8f3445d03b4c1e54dfb071596f63a0e4654f49f10": failed to find network info for sandbox "11aaa894ae434b08da8122c8f3445d03b4c1e54dfb071596f63a0e4654f49f10"
	  Warning  FailedCreatePodSandBox  10m                  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "c8126e80126ff891a4935c60cfec55753f6bb51d789c0eb46098b72267c7d53c": failed to find network info for sandbox "c8126e80126ff891a4935c60cfec55753f6bb51d789c0eb46098b72267c7d53c"
	  Warning  FailedCreatePodSandBox  10m                  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "1389a2f92f350a6f495c76f80031300b6442a6a0cc67abd4b045ff9150b3fc3a": failed to find network info for sandbox "1389a2f92f350a6f495c76f80031300b6442a6a0cc67abd4b045ff9150b3fc3a"
	  Warning  FailedCreatePodSandBox  2m3s (x38 over 10m)  kubelet            (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "c3a9afe91461f3ea405980387ac5fab85785c7cf3f180d2b0f894e1df94ca62d": failed to find network info for sandbox "c3a9afe91461f3ea405980387ac5fab85785c7cf3f180d2b0f894e1df94ca62d"

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestMultiControlPlane/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DeployApp (727.37s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (2.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 kubectl -- exec busybox-7b57f96db7-4jfjt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 kubectl -- exec busybox-7b57f96db7-4jfjt -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 kubectl -- exec busybox-7b57f96db7-6hrm6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 kubectl -- exec busybox-7b57f96db7-6hrm6 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 kubectl -- exec busybox-7b57f96db7-mknzs -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:207: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-472903 kubectl -- exec busybox-7b57f96db7-mknzs -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3": exit status 1 (116.327536ms)

                                                
                                                
** stderr ** 
	error: Internal error occurred: unable to upgrade connection: container not found ("busybox")

                                                
                                                
** /stderr **
ha_test.go:209: Pod busybox-7b57f96db7-mknzs could not resolve 'host.minikube.internal': exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-472903
helpers_test.go:243: (dbg) docker inspect ha-472903:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "05f03528ecc5ba6a39041bcc2845d236679d61fa3752c15e7e068dac7d8c9047",
	        "Created": "2025-09-16T23:56:35.178831158Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 804802,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-16T23:56:35.209552026Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/05f03528ecc5ba6a39041bcc2845d236679d61fa3752c15e7e068dac7d8c9047/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/05f03528ecc5ba6a39041bcc2845d236679d61fa3752c15e7e068dac7d8c9047/hostname",
	        "HostsPath": "/var/lib/docker/containers/05f03528ecc5ba6a39041bcc2845d236679d61fa3752c15e7e068dac7d8c9047/hosts",
	        "LogPath": "/var/lib/docker/containers/05f03528ecc5ba6a39041bcc2845d236679d61fa3752c15e7e068dac7d8c9047/05f03528ecc5ba6a39041bcc2845d236679d61fa3752c15e7e068dac7d8c9047-json.log",
	        "Name": "/ha-472903",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-472903:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-472903",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "05f03528ecc5ba6a39041bcc2845d236679d61fa3752c15e7e068dac7d8c9047",
	                "LowerDir": "/var/lib/docker/overlay2/37229b42d46c992f89d690b880f5a9c43e154eecc2ad5aeee133e9eb30accccb-init/diff:/var/lib/docker/overlay2/949a3fbecd0c2c005aa419b7ddc214e7bf4333225d7b227e8b0d0ea188b945ec/diff",
	                "MergedDir": "/var/lib/docker/overlay2/37229b42d46c992f89d690b880f5a9c43e154eecc2ad5aeee133e9eb30accccb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/37229b42d46c992f89d690b880f5a9c43e154eecc2ad5aeee133e9eb30accccb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/37229b42d46c992f89d690b880f5a9c43e154eecc2ad5aeee133e9eb30accccb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-472903",
	                "Source": "/var/lib/docker/volumes/ha-472903/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-472903",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-472903",
	                "name.minikube.sigs.k8s.io": "ha-472903",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "abe382ce28757e80b5cdae91a64217d3672b21c23f3517480bd53105aeca147e",
	            "SandboxKey": "/var/run/docker/netns/abe382ce2875",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33544"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33545"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33548"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33546"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33547"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-472903": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7e:42:9f:f6:50:c2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "22d49b2f397dfabc2a3967bd54b05204a52976e683f65ff07bff00e793040bef",
	                    "EndpointID": "4d4d83129a167c8183e8ef58cc6057f613d8d69adf59710ba6c623d1ff2970c6",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-472903",
	                        "05f03528ecc5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-472903 -n ha-472903
helpers_test.go:252: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p ha-472903 logs -n 25: (1.105749058s)
helpers_test.go:260: TestMultiControlPlane/serial/PingHostFromPods logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                           ARGS                                                            │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ kubectl │ ha-472903 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:08 UTC │ 17 Sep 25 00:08 UTC │
	│ kubectl │ ha-472903 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:08 UTC │ 17 Sep 25 00:08 UTC │
	│ kubectl │ ha-472903 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:08 UTC │ 17 Sep 25 00:08 UTC │
	│ kubectl │ ha-472903 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:08 UTC │ 17 Sep 25 00:08 UTC │
	│ kubectl │ ha-472903 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:09 UTC │ 17 Sep 25 00:09 UTC │
	│ kubectl │ ha-472903 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:09 UTC │ 17 Sep 25 00:09 UTC │
	│ kubectl │ ha-472903 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:09 UTC │ 17 Sep 25 00:09 UTC │
	│ kubectl │ ha-472903 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:09 UTC │ 17 Sep 25 00:09 UTC │
	│ kubectl │ ha-472903 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:10 UTC │ 17 Sep 25 00:10 UTC │
	│ kubectl │ ha-472903 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                                     │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:10 UTC │ 17 Sep 25 00:10 UTC │
	│ kubectl │ ha-472903 kubectl -- exec busybox-7b57f96db7-4jfjt -- nslookup kubernetes.io                                              │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:10 UTC │ 17 Sep 25 00:10 UTC │
	│ kubectl │ ha-472903 kubectl -- exec busybox-7b57f96db7-6hrm6 -- nslookup kubernetes.io                                              │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:10 UTC │ 17 Sep 25 00:10 UTC │
	│ kubectl │ ha-472903 kubectl -- exec busybox-7b57f96db7-mknzs -- nslookup kubernetes.io                                              │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:10 UTC │                     │
	│ kubectl │ ha-472903 kubectl -- exec busybox-7b57f96db7-4jfjt -- nslookup kubernetes.default                                         │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:10 UTC │ 17 Sep 25 00:10 UTC │
	│ kubectl │ ha-472903 kubectl -- exec busybox-7b57f96db7-6hrm6 -- nslookup kubernetes.default                                         │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:10 UTC │ 17 Sep 25 00:10 UTC │
	│ kubectl │ ha-472903 kubectl -- exec busybox-7b57f96db7-mknzs -- nslookup kubernetes.default                                         │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:10 UTC │                     │
	│ kubectl │ ha-472903 kubectl -- exec busybox-7b57f96db7-4jfjt -- nslookup kubernetes.default.svc.cluster.local                       │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:10 UTC │ 17 Sep 25 00:10 UTC │
	│ kubectl │ ha-472903 kubectl -- exec busybox-7b57f96db7-6hrm6 -- nslookup kubernetes.default.svc.cluster.local                       │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:10 UTC │ 17 Sep 25 00:10 UTC │
	│ kubectl │ ha-472903 kubectl -- exec busybox-7b57f96db7-mknzs -- nslookup kubernetes.default.svc.cluster.local                       │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:10 UTC │                     │
	│ kubectl │ ha-472903 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                                     │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:10 UTC │ 17 Sep 25 00:10 UTC │
	│ kubectl │ ha-472903 kubectl -- exec busybox-7b57f96db7-4jfjt -- sh -c nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3 │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:10 UTC │ 17 Sep 25 00:10 UTC │
	│ kubectl │ ha-472903 kubectl -- exec busybox-7b57f96db7-4jfjt -- sh -c ping -c 1 192.168.49.1                                        │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:10 UTC │ 17 Sep 25 00:10 UTC │
	│ kubectl │ ha-472903 kubectl -- exec busybox-7b57f96db7-6hrm6 -- sh -c nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3 │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:10 UTC │ 17 Sep 25 00:10 UTC │
	│ kubectl │ ha-472903 kubectl -- exec busybox-7b57f96db7-6hrm6 -- sh -c ping -c 1 192.168.49.1                                        │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:10 UTC │ 17 Sep 25 00:10 UTC │
	│ kubectl │ ha-472903 kubectl -- exec busybox-7b57f96db7-mknzs -- sh -c nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3 │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:10 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/16 23:56:30
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 23:56:30.301112  804231 out.go:360] Setting OutFile to fd 1 ...
	I0916 23:56:30.301322  804231 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0916 23:56:30.301330  804231 out.go:374] Setting ErrFile to fd 2...
	I0916 23:56:30.301335  804231 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0916 23:56:30.301535  804231 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-749120/.minikube/bin
	I0916 23:56:30.302024  804231 out.go:368] Setting JSON to false
	I0916 23:56:30.302925  804231 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":9532,"bootTime":1758057458,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 23:56:30.303027  804231 start.go:140] virtualization: kvm guest
	I0916 23:56:30.304965  804231 out.go:179] * [ha-472903] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0916 23:56:30.306181  804231 out.go:179]   - MINIKUBE_LOCATION=21550
	I0916 23:56:30.306189  804231 notify.go:220] Checking for updates...
	I0916 23:56:30.308309  804231 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 23:56:30.309530  804231 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-749120/kubeconfig
	I0916 23:56:30.310577  804231 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-749120/.minikube
	I0916 23:56:30.311523  804231 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 23:56:30.312490  804231 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 23:56:30.313634  804231 driver.go:421] Setting default libvirt URI to qemu:///system
	I0916 23:56:30.336203  804231 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0916 23:56:30.336330  804231 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 23:56:30.390690  804231 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-16 23:56:30.380521507 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 23:56:30.390801  804231 docker.go:318] overlay module found
	I0916 23:56:30.392435  804231 out.go:179] * Using the docker driver based on user configuration
	I0916 23:56:30.393493  804231 start.go:304] selected driver: docker
	I0916 23:56:30.393505  804231 start.go:918] validating driver "docker" against <nil>
	I0916 23:56:30.393517  804231 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 23:56:30.394092  804231 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 23:56:30.448140  804231 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-16 23:56:30.438500908 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 23:56:30.448302  804231 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0916 23:56:30.448529  804231 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 23:56:30.450143  804231 out.go:179] * Using Docker driver with root privileges
	I0916 23:56:30.451156  804231 cni.go:84] Creating CNI manager for ""
	I0916 23:56:30.451216  804231 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0916 23:56:30.451226  804231 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 23:56:30.451301  804231 start.go:348] cluster config:
	{Name:ha-472903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPl
ugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m
0s}
	I0916 23:56:30.452491  804231 out.go:179] * Starting "ha-472903" primary control-plane node in "ha-472903" cluster
	I0916 23:56:30.453469  804231 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0916 23:56:30.454617  804231 out.go:179] * Pulling base image v0.0.48 ...
	I0916 23:56:30.455626  804231 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0916 23:56:30.455658  804231 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4
	I0916 23:56:30.455669  804231 cache.go:58] Caching tarball of preloaded images
	I0916 23:56:30.455737  804231 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0916 23:56:30.455747  804231 preload.go:172] Found /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 23:56:30.455875  804231 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0916 23:56:30.456208  804231 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0916 23:56:30.456245  804231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json: {Name:mkb16495f6ef626fa58a9600f3b4a943b5aaf14d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:56:30.475568  804231 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0916 23:56:30.475587  804231 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0916 23:56:30.475611  804231 cache.go:232] Successfully downloaded all kic artifacts
	I0916 23:56:30.475644  804231 start.go:360] acquireMachinesLock for ha-472903: {Name:mk994658ce3314f2aed1dec341debc49d36a4326 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 23:56:30.475759  804231 start.go:364] duration metric: took 97.738µs to acquireMachinesLock for "ha-472903"
	I0916 23:56:30.475786  804231 start.go:93] Provisioning new machine with config: &{Name:ha-472903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APISer
verIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath
: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 23:56:30.475881  804231 start.go:125] createHost starting for "" (driver="docker")
	I0916 23:56:30.477680  804231 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0916 23:56:30.477953  804231 start.go:159] libmachine.API.Create for "ha-472903" (driver="docker")
	I0916 23:56:30.477986  804231 client.go:168] LocalClient.Create starting
	I0916 23:56:30.478060  804231 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem
	I0916 23:56:30.478097  804231 main.go:141] libmachine: Decoding PEM data...
	I0916 23:56:30.478118  804231 main.go:141] libmachine: Parsing certificate...
	I0916 23:56:30.478203  804231 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem
	I0916 23:56:30.478234  804231 main.go:141] libmachine: Decoding PEM data...
	I0916 23:56:30.478247  804231 main.go:141] libmachine: Parsing certificate...
	I0916 23:56:30.478706  804231 cli_runner.go:164] Run: docker network inspect ha-472903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0916 23:56:30.494743  804231 cli_runner.go:211] docker network inspect ha-472903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0916 23:56:30.494806  804231 network_create.go:284] running [docker network inspect ha-472903] to gather additional debugging logs...
	I0916 23:56:30.494829  804231 cli_runner.go:164] Run: docker network inspect ha-472903
	W0916 23:56:30.510851  804231 cli_runner.go:211] docker network inspect ha-472903 returned with exit code 1
	I0916 23:56:30.510886  804231 network_create.go:287] error running [docker network inspect ha-472903]: docker network inspect ha-472903: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-472903 not found
	I0916 23:56:30.510919  804231 network_create.go:289] output of [docker network inspect ha-472903]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-472903 not found
	
	** /stderr **
	I0916 23:56:30.511007  804231 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:56:30.527272  804231 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001b12870}
	I0916 23:56:30.527312  804231 network_create.go:124] attempt to create docker network ha-472903 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0916 23:56:30.527357  804231 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-472903 ha-472903
	I0916 23:56:30.581246  804231 network_create.go:108] docker network ha-472903 192.168.49.0/24 created
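For reference, a minimal way to confirm the network reported as created above, reusing the same inspect template the log itself runs (name, subnet and gateway are taken from the surrounding lines; this is a sketch, not part of the test run):

  docker network inspect ha-472903 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
  # expected, per the log: 192.168.49.0/24 192.168.49.1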
	I0916 23:56:30.581278  804231 kic.go:121] calculated static IP "192.168.49.2" for the "ha-472903" container
	I0916 23:56:30.581331  804231 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 23:56:30.597113  804231 cli_runner.go:164] Run: docker volume create ha-472903 --label name.minikube.sigs.k8s.io=ha-472903 --label created_by.minikube.sigs.k8s.io=true
	I0916 23:56:30.614615  804231 oci.go:103] Successfully created a docker volume ha-472903
	I0916 23:56:30.614694  804231 cli_runner.go:164] Run: docker run --rm --name ha-472903-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-472903 --entrypoint /usr/bin/test -v ha-472903:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0916 23:56:30.983301  804231 oci.go:107] Successfully prepared a docker volume ha-472903
	I0916 23:56:30.983346  804231 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0916 23:56:30.983369  804231 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 23:56:30.983457  804231 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-472903:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 23:56:35.109877  804231 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-472903:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.126378793s)
	I0916 23:56:35.109930  804231 kic.go:203] duration metric: took 4.126557088s to extract preloaded images to volume ...
	W0916 23:56:35.110010  804231 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0916 23:56:35.110041  804231 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0916 23:56:35.110081  804231 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 23:56:35.162423  804231 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-472903 --name ha-472903 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-472903 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-472903 --network ha-472903 --ip 192.168.49.2 --volume ha-472903:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0916 23:56:35.411448  804231 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Running}}
	I0916 23:56:35.428877  804231 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0916 23:56:35.447492  804231 cli_runner.go:164] Run: docker exec ha-472903 stat /var/lib/dpkg/alternatives/iptables
	I0916 23:56:35.490145  804231 oci.go:144] the created container "ha-472903" has a running status.
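The docker run above publishes SSH, the apiserver port and the other control ports on ephemeral host ports bound to 127.0.0.1; a standard `docker port` call (not part of this run, shown only as a sketch) would display the mapping the SSH steps below rely on (33544 in this run):

  docker port ha-472903
  # e.g. 22/tcp   -> 127.0.0.1:33544   (the port used by the ssh client lines below)
  #      8443/tcp -> 127.0.0.1:<ephemeral>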
	I0916 23:56:35.490177  804231 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa...
	I0916 23:56:35.748917  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0916 23:56:35.748974  804231 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 23:56:35.776040  804231 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0916 23:56:35.795374  804231 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 23:56:35.795403  804231 kic_runner.go:114] Args: [docker exec --privileged ha-472903 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 23:56:35.841194  804231 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0916 23:56:35.859165  804231 machine.go:93] provisionDockerMachine start ...
	I0916 23:56:35.859278  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:56:35.877348  804231 main.go:141] libmachine: Using SSH client type: native
	I0916 23:56:35.877637  804231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33544 <nil> <nil>}
	I0916 23:56:35.877654  804231 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 23:56:36.014327  804231 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-472903
	
	I0916 23:56:36.014356  804231 ubuntu.go:182] provisioning hostname "ha-472903"
	I0916 23:56:36.014430  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:56:36.033295  804231 main.go:141] libmachine: Using SSH client type: native
	I0916 23:56:36.033543  804231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33544 <nil> <nil>}
	I0916 23:56:36.033558  804231 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-472903 && echo "ha-472903" | sudo tee /etc/hostname
	I0916 23:56:36.178557  804231 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-472903
	
	I0916 23:56:36.178627  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:56:36.196584  804231 main.go:141] libmachine: Using SSH client type: native
	I0916 23:56:36.196791  804231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33544 <nil> <nil>}
	I0916 23:56:36.196814  804231 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-472903' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-472903/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-472903' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 23:56:36.331895  804231 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 23:56:36.331954  804231 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-749120/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-749120/.minikube}
	I0916 23:56:36.331987  804231 ubuntu.go:190] setting up certificates
	I0916 23:56:36.332000  804231 provision.go:84] configureAuth start
	I0916 23:56:36.332062  804231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903
	I0916 23:56:36.350923  804231 provision.go:143] copyHostCerts
	I0916 23:56:36.350968  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem
	I0916 23:56:36.351011  804231 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem, removing ...
	I0916 23:56:36.351021  804231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem
	I0916 23:56:36.351100  804231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem (1078 bytes)
	I0916 23:56:36.351216  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem
	I0916 23:56:36.351254  804231 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem, removing ...
	I0916 23:56:36.351265  804231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem
	I0916 23:56:36.351307  804231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem (1123 bytes)
	I0916 23:56:36.351374  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem
	I0916 23:56:36.351400  804231 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem, removing ...
	I0916 23:56:36.351409  804231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem
	I0916 23:56:36.351461  804231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem (1675 bytes)
	I0916 23:56:36.351538  804231 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem org=jenkins.ha-472903 san=[127.0.0.1 192.168.49.2 ha-472903 localhost minikube]
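The docker-machine server cert generated here carries the SANs listed in the line above; a standard openssl dump of that file (path from the auth options earlier in the log; the command itself is not part of the run) would confirm them:

  openssl x509 -noout -text \
    -in /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem \
    | grep -A1 'Subject Alternative Name'
  # expect the SANs from the log: 127.0.0.1, 192.168.49.2, ha-472903, localhost, minikube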
	I0916 23:56:36.406870  804231 provision.go:177] copyRemoteCerts
	I0916 23:56:36.406927  804231 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 23:56:36.406977  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:56:36.424064  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0916 23:56:36.520663  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 23:56:36.520737  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 23:56:36.546100  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 23:56:36.546162  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0916 23:56:36.569886  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 23:56:36.569946  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 23:56:36.593694  804231 provision.go:87] duration metric: took 261.676108ms to configureAuth
	I0916 23:56:36.593725  804231 ubuntu.go:206] setting minikube options for container-runtime
	I0916 23:56:36.593891  804231 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0916 23:56:36.593903  804231 machine.go:96] duration metric: took 734.71199ms to provisionDockerMachine
	I0916 23:56:36.593911  804231 client.go:171] duration metric: took 6.115914604s to LocalClient.Create
	I0916 23:56:36.593933  804231 start.go:167] duration metric: took 6.115991162s to libmachine.API.Create "ha-472903"
	I0916 23:56:36.593942  804231 start.go:293] postStartSetup for "ha-472903" (driver="docker")
	I0916 23:56:36.593950  804231 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 23:56:36.593994  804231 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 23:56:36.594038  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:56:36.611127  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0916 23:56:36.708294  804231 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 23:56:36.711629  804231 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 23:56:36.711662  804231 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 23:56:36.711669  804231 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 23:56:36.711677  804231 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0916 23:56:36.711690  804231 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-749120/.minikube/addons for local assets ...
	I0916 23:56:36.711734  804231 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-749120/.minikube/files for local assets ...
	I0916 23:56:36.711817  804231 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> 7527072.pem in /etc/ssl/certs
	I0916 23:56:36.711829  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> /etc/ssl/certs/7527072.pem
	I0916 23:56:36.711917  804231 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 23:56:36.720521  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem --> /etc/ssl/certs/7527072.pem (1708 bytes)
	I0916 23:56:36.746614  804231 start.go:296] duration metric: took 152.657806ms for postStartSetup
	I0916 23:56:36.746970  804231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903
	I0916 23:56:36.763912  804231 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0916 23:56:36.764159  804231 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 23:56:36.764204  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:56:36.781099  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0916 23:56:36.872372  804231 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 23:56:36.876670  804231 start.go:128] duration metric: took 6.400768235s to createHost
	I0916 23:56:36.876701  804231 start.go:83] releasing machines lock for "ha-472903", held for 6.400928988s
	I0916 23:56:36.876787  804231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903
	I0916 23:56:36.894080  804231 ssh_runner.go:195] Run: cat /version.json
	I0916 23:56:36.894094  804231 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 23:56:36.894141  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:56:36.894182  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:56:36.912628  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0916 23:56:36.913001  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0916 23:56:37.079386  804231 ssh_runner.go:195] Run: systemctl --version
	I0916 23:56:37.084104  804231 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 23:56:37.088563  804231 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 23:56:37.116786  804231 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 23:56:37.116846  804231 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 23:56:37.142716  804231 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
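The two find commands above amount to patching the loopback CNI config (adding a "name" field and pinning cniVersion to 1.0.0) and disabling any bridge/podman configs by renaming them; the practical effect on the node, as a sketch using the file names the log reports, is roughly:

  ls /etc/cni/net.d/
  # 87-podman-bridge.conflist.mk_disabled   <- renamed by the second find
  # 100-crio-bridge.conf.mk_disabled        <- renamed by the second find
  # the *loopback.conf* file stays in place, patched as described above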
	I0916 23:56:37.142738  804231 start.go:495] detecting cgroup driver to use...
	I0916 23:56:37.142772  804231 detect.go:190] detected "systemd" cgroup driver on host os
	I0916 23:56:37.142832  804231 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 23:56:37.154693  804231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 23:56:37.165920  804231 docker.go:218] disabling cri-docker service (if available) ...
	I0916 23:56:37.165978  804231 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 23:56:37.179227  804231 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 23:56:37.192751  804231 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 23:56:37.255915  804231 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 23:56:37.324761  804231 docker.go:234] disabling docker service ...
	I0916 23:56:37.324836  804231 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 23:56:37.342233  804231 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 23:56:37.353324  804231 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 23:56:37.420555  804231 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 23:56:37.486396  804231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 23:56:37.497453  804231 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 23:56:37.513435  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0916 23:56:37.524399  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 23:56:37.534072  804231 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0916 23:56:37.534132  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0916 23:56:37.543872  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 23:56:37.553478  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 23:56:37.562918  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 23:56:37.572431  804231 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 23:56:37.581176  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 23:56:37.590540  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 23:56:37.599825  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 23:56:37.609340  804231 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 23:56:37.617500  804231 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 23:56:37.625771  804231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:56:37.685687  804231 ssh_runner.go:195] Run: sudo systemctl restart containerd
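After the series of sed edits above and the containerd restart, the relevant keys in /etc/containerd/config.toml should read as below; a quick grep (patterns taken from the sed expressions in the log, run here only as an illustrative check) would confirm:

  sudo grep -n 'SystemdCgroup'  /etc/containerd/config.toml   # expect: SystemdCgroup = true
  sudo grep -n 'sandbox_image'  /etc/containerd/config.toml   # expect: "registry.k8s.io/pause:3.10.1"
  sudo grep -n 'conf_dir'       /etc/containerd/config.toml   # expect: conf_dir = "/etc/cni/net.d"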
	I0916 23:56:37.787201  804231 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0916 23:56:37.787275  804231 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0916 23:56:37.791126  804231 start.go:563] Will wait 60s for crictl version
	I0916 23:56:37.791200  804231 ssh_runner.go:195] Run: which crictl
	I0916 23:56:37.794684  804231 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 23:56:37.828753  804231 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0916 23:56:37.828806  804231 ssh_runner.go:195] Run: containerd --version
	I0916 23:56:37.851610  804231 ssh_runner.go:195] Run: containerd --version
	I0916 23:56:37.876577  804231 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0916 23:56:37.877711  804231 cli_runner.go:164] Run: docker network inspect ha-472903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:56:37.894044  804231 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 23:56:37.897995  804231 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 23:56:37.909702  804231 kubeadm.go:875] updating cluster {Name:ha-472903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIP
s:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetCli
entPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 23:56:37.909830  804231 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0916 23:56:37.909936  804231 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 23:56:37.943964  804231 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 23:56:37.943985  804231 containerd.go:534] Images already preloaded, skipping extraction
	I0916 23:56:37.944040  804231 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 23:56:37.976374  804231 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 23:56:37.976397  804231 cache_images.go:85] Images are preloaded, skipping loading
	I0916 23:56:37.976405  804231 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 containerd true true} ...
	I0916 23:56:37.976525  804231 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-472903 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
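This drop-in is what gets written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a little further down (the scp lines below); after the daemon-reload, the rendered ExecStart could be checked with standard systemd tooling, e.g. (sketch only, not part of the run):

  systemctl cat kubelet | grep -- --node-ip
  # ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet ... --node-ip=192.168.49.2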
	I0916 23:56:37.976590  804231 ssh_runner.go:195] Run: sudo crictl info
	I0916 23:56:38.009585  804231 cni.go:84] Creating CNI manager for ""
	I0916 23:56:38.009608  804231 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0916 23:56:38.009620  804231 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 23:56:38.009642  804231 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-472903 NodeName:ha-472903 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 23:56:38.009740  804231 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "ha-472903"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
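This generated config is written to /var/tmp/minikube/kubeadm.yaml.new below and later copied into place; roughly, kubeadm then consumes it via its standard --config flag. The exact minikube invocation and any extra flags are outside this excerpt, so treat the following as a sketch (the SystemVerification skip is implied by the "ignoring SystemVerification" line later in the log):

  sudo /var/lib/minikube/binaries/v1.34.0/kubeadm init \
    --config /var/tmp/minikube/kubeadm.yaml \
    --ignore-preflight-errors=SystemVerification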
	
	I0916 23:56:38.009763  804231 kube-vip.go:115] generating kube-vip config ...
	I0916 23:56:38.009799  804231 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0916 23:56:38.022796  804231 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0916 23:56:38.022978  804231 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
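Once kubelet picks this manifest up from /etc/kubernetes/manifests (it is copied there as kube-vip.yaml a few lines below), the VIP from the env above should appear on eth0. Standard Linux tooling to check, as a sketch (the commands are not from this log; the address and port come from the manifest and cluster config above):

  ip addr show eth0 | grep 192.168.49.254
  curl -k https://192.168.49.254:8443/livez    # once the apiserver behind the VIP is up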
	I0916 23:56:38.023041  804231 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0916 23:56:38.032162  804231 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 23:56:38.032241  804231 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0916 23:56:38.040936  804231 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0916 23:56:38.058672  804231 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 23:56:38.079097  804231 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I0916 23:56:38.097183  804231 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I0916 23:56:38.116629  804231 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0916 23:56:38.120221  804231 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
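The bash one-liner above rewrites /etc/hosts atomically via a temp file; the resulting entry can be confirmed with the same grep the log runs just before it:

  grep 'control-plane.minikube.internal' /etc/hosts
  # 192.168.49.254	control-plane.minikube.internal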
	I0916 23:56:38.131205  804231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:56:38.195735  804231 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 23:56:38.216649  804231 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903 for IP: 192.168.49.2
	I0916 23:56:38.216671  804231 certs.go:194] generating shared ca certs ...
	I0916 23:56:38.216692  804231 certs.go:226] acquiring lock for ca certs: {Name:mk87d179b4a631193bd9c86db8034ccf19400cde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:56:38.216854  804231 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key
	I0916 23:56:38.216907  804231 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key
	I0916 23:56:38.216920  804231 certs.go:256] generating profile certs ...
	I0916 23:56:38.216989  804231 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key
	I0916 23:56:38.217007  804231 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.crt with IP's: []
	I0916 23:56:38.286683  804231 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.crt ...
	I0916 23:56:38.286713  804231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.crt: {Name:mk764ef4ac73429cea14d799835f3822d8afb254 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:56:38.286876  804231 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key ...
	I0916 23:56:38.286887  804231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key: {Name:mk988f40b7ad20c61b4ffc19afd15eea50787a6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:56:38.286965  804231 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.ef70afe8
	I0916 23:56:38.286981  804231 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.ef70afe8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I0916 23:56:38.411782  804231 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.ef70afe8 ...
	I0916 23:56:38.411812  804231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.ef70afe8: {Name:mkbca9fcc4cd73eb913b43ef67240975ba048601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:56:38.411977  804231 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.ef70afe8 ...
	I0916 23:56:38.411990  804231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.ef70afe8: {Name:mk56f7fb29011c6372caaf96dfdbcab1b202e8b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:56:38.412061  804231 certs.go:381] copying /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.ef70afe8 -> /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt
	I0916 23:56:38.412138  804231 certs.go:385] copying /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.ef70afe8 -> /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key
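The apiserver profile cert assembled here is the one that has to cover both the node IP and the HA VIP; a standard openssl dump of the copied file (path as in the log; the check itself is a sketch, not part of the run) would list those SANs:

  openssl x509 -noout -text \
    -in /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt \
    | grep -A2 'Subject Alternative Name'
  # expect the IPs generated above: 10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.49.2 and the VIP 192.168.49.254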
	I0916 23:56:38.412190  804231 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key
	I0916 23:56:38.412204  804231 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt with IP's: []
	I0916 23:56:38.735728  804231 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt ...
	I0916 23:56:38.735759  804231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt: {Name:mke25602938652bbe51197bb8e5738dfc5dca50b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:56:38.735935  804231 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key ...
	I0916 23:56:38.735947  804231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key: {Name:mkc7d616357a8be8181d43ca8cb33ab512ce94dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:56:38.736027  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 23:56:38.736044  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 23:56:38.736055  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 23:56:38.736068  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 23:56:38.736078  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 23:56:38.736090  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 23:56:38.736105  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 23:56:38.736115  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 23:56:38.736175  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem (1338 bytes)
	W0916 23:56:38.736210  804231 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707_empty.pem, impossibly tiny 0 bytes
	I0916 23:56:38.736218  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem (1675 bytes)
	I0916 23:56:38.736242  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem (1078 bytes)
	I0916 23:56:38.736266  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem (1123 bytes)
	I0916 23:56:38.736284  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem (1675 bytes)
	I0916 23:56:38.736322  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem (1708 bytes)
	I0916 23:56:38.736347  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> /usr/share/ca-certificates/7527072.pem
	I0916 23:56:38.736360  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:56:38.736372  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem -> /usr/share/ca-certificates/752707.pem
	I0916 23:56:38.736905  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 23:56:38.762142  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 23:56:38.786590  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 23:56:38.810694  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0916 23:56:38.834521  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0916 23:56:38.858677  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 23:56:38.881975  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 23:56:38.906146  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 23:56:38.929698  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem --> /usr/share/ca-certificates/7527072.pem (1708 bytes)
	I0916 23:56:38.955154  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 23:56:38.978551  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem --> /usr/share/ca-certificates/752707.pem (1338 bytes)
	I0916 23:56:39.001782  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 23:56:39.019405  804231 ssh_runner.go:195] Run: openssl version
	I0916 23:56:39.024868  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/752707.pem && ln -fs /usr/share/ca-certificates/752707.pem /etc/ssl/certs/752707.pem"
	I0916 23:56:39.034165  804231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/752707.pem
	I0916 23:56:39.038348  804231 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 23:54 /usr/share/ca-certificates/752707.pem
	I0916 23:56:39.038407  804231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/752707.pem
	I0916 23:56:39.045172  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/752707.pem /etc/ssl/certs/51391683.0"
	I0916 23:56:39.054735  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7527072.pem && ln -fs /usr/share/ca-certificates/7527072.pem /etc/ssl/certs/7527072.pem"
	I0916 23:56:39.065180  804231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7527072.pem
	I0916 23:56:39.068976  804231 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 23:54 /usr/share/ca-certificates/7527072.pem
	I0916 23:56:39.069038  804231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7527072.pem
	I0916 23:56:39.075920  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7527072.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 23:56:39.085838  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 23:56:39.095394  804231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:56:39.098966  804231 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:56:39.099019  804231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:56:39.105643  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
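The three test -L/ln -fs commands above implement the standard OpenSSL hashed-symlink layout for /etc/ssl/certs; the hash on the left of each link name is exactly what the preceding `openssl x509 -hash -noout -in <cert>` steps printed. As an illustrative check for the last one:

  ls -l /etc/ssl/certs/b5213941.0
  # /etc/ssl/certs/b5213941.0 -> /etc/ssl/certs/minikubeCA.pem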
	I0916 23:56:39.114800  804231 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 23:56:39.117988  804231 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 23:56:39.118033  804231 kubeadm.go:392] StartCluster: {Name:ha-472903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 23:56:39.118097  804231 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0916 23:56:39.118132  804231 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 23:56:39.154291  804231 cri.go:89] found id: ""
	I0916 23:56:39.154361  804231 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 23:56:39.163485  804231 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 23:56:39.172454  804231 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0916 23:56:39.172499  804231 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 23:56:39.181066  804231 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 23:56:39.181098  804231 kubeadm.go:157] found existing configuration files:
	
	I0916 23:56:39.181131  804231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 23:56:39.189824  804231 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 23:56:39.189873  804231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 23:56:39.198165  804231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 23:56:39.206772  804231 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 23:56:39.206819  804231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 23:56:39.215119  804231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 23:56:39.223660  804231 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 23:56:39.223717  804231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 23:56:39.232099  804231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 23:56:39.240514  804231 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 23:56:39.240559  804231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 23:56:39.248850  804231 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0916 23:56:39.285897  804231 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0916 23:56:39.285950  804231 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 23:56:39.300660  804231 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0916 23:56:39.300727  804231 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1037-gcp
	I0916 23:56:39.300801  804231 kubeadm.go:310] OS: Linux
	I0916 23:56:39.300901  804231 kubeadm.go:310] CGROUPS_CPU: enabled
	I0916 23:56:39.300975  804231 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0916 23:56:39.301037  804231 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0916 23:56:39.301080  804231 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0916 23:56:39.301127  804231 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0916 23:56:39.301169  804231 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0916 23:56:39.301211  804231 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0916 23:56:39.301268  804231 kubeadm.go:310] CGROUPS_IO: enabled
	I0916 23:56:39.351787  804231 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 23:56:39.351909  804231 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 23:56:39.351995  804231 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 23:56:39.358062  804231 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 23:56:39.360794  804231 out.go:252]   - Generating certificates and keys ...
	I0916 23:56:39.360906  804231 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 23:56:39.360984  804231 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 23:56:39.805287  804231 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 23:56:40.002708  804231 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 23:56:40.279763  804231 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 23:56:40.813028  804231 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 23:56:41.074848  804231 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 23:56:41.075343  804231 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-472903 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0916 23:56:41.124880  804231 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 23:56:41.125041  804231 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-472903 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0916 23:56:41.707716  804231 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 23:56:42.089212  804231 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 23:56:42.627038  804231 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 23:56:42.627119  804231 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 23:56:42.823901  804231 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 23:56:43.022989  804231 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 23:56:43.163778  804231 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 23:56:43.708743  804231 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 23:56:44.024642  804231 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 23:56:44.025130  804231 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 23:56:44.027319  804231 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 23:56:44.029599  804231 out.go:252]   - Booting up control plane ...
	I0916 23:56:44.029737  804231 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 23:56:44.029842  804231 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 23:56:44.030181  804231 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 23:56:44.039957  804231 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 23:56:44.040118  804231 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0916 23:56:44.047794  804231 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0916 23:56:44.048177  804231 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 23:56:44.048269  804231 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 23:56:44.122629  804231 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 23:56:44.122739  804231 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 23:56:45.124352  804231 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001822735s
	I0916 23:56:45.127338  804231 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0916 23:56:45.127477  804231 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0916 23:56:45.127582  804231 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0916 23:56:45.127694  804231 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0916 23:56:47.478256  804231 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.350892202s
	I0916 23:56:47.717698  804231 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.590223043s
	I0916 23:56:49.129161  804231 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 4.001748341s
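For reference, the three health endpoints kubeadm probes above can also be checked by hand from the node. This is a sketch only; depending on component flags, the controller-manager and scheduler endpoints may require credentials rather than anonymous access:

    curl -sk https://192.168.49.2:8443/livez   && echo   # kube-apiserver
    curl -sk https://127.0.0.1:10257/healthz   && echo   # kube-controller-manager
    curl -sk https://127.0.0.1:10259/livez     && echo   # kube-scheduler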
	I0916 23:56:49.140036  804231 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 23:56:49.148779  804231 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 23:56:49.158010  804231 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 23:56:49.158279  804231 kubeadm.go:310] [mark-control-plane] Marking the node ha-472903 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 23:56:49.165085  804231 kubeadm.go:310] [bootstrap-token] Using token: 4apri1.yqe8ok7wc4ltba21
	I0916 23:56:49.166180  804231 out.go:252]   - Configuring RBAC rules ...
	I0916 23:56:49.166328  804231 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 23:56:49.169225  804231 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 23:56:49.174527  804231 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 23:56:49.176741  804231 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 23:56:49.178892  804231 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 23:56:49.181107  804231 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 23:56:49.534440  804231 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 23:56:49.948567  804231 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 23:56:50.534581  804231 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 23:56:50.535429  804231 kubeadm.go:310] 
	I0916 23:56:50.535529  804231 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 23:56:50.535542  804231 kubeadm.go:310] 
	I0916 23:56:50.535650  804231 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 23:56:50.535660  804231 kubeadm.go:310] 
	I0916 23:56:50.535696  804231 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 23:56:50.535801  804231 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 23:56:50.535858  804231 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 23:56:50.535872  804231 kubeadm.go:310] 
	I0916 23:56:50.535940  804231 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 23:56:50.535949  804231 kubeadm.go:310] 
	I0916 23:56:50.536027  804231 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 23:56:50.536037  804231 kubeadm.go:310] 
	I0916 23:56:50.536125  804231 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 23:56:50.536212  804231 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 23:56:50.536280  804231 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 23:56:50.536286  804231 kubeadm.go:310] 
	I0916 23:56:50.536356  804231 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 23:56:50.536441  804231 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 23:56:50.536448  804231 kubeadm.go:310] 
	I0916 23:56:50.536543  804231 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 4apri1.yqe8ok7wc4ltba21 \
	I0916 23:56:50.536688  804231 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:52c78ec9ad9a2dc0941e43ce337b864c76ea573e452bc75ed737e69ad76deac1 \
	I0916 23:56:50.536722  804231 kubeadm.go:310] 	--control-plane 
	I0916 23:56:50.536731  804231 kubeadm.go:310] 
	I0916 23:56:50.536842  804231 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 23:56:50.536857  804231 kubeadm.go:310] 
	I0916 23:56:50.536947  804231 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 4apri1.yqe8ok7wc4ltba21 \
	I0916 23:56:50.537084  804231 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:52c78ec9ad9a2dc0941e43ce337b864c76ea573e452bc75ed737e69ad76deac1 
	I0916 23:56:50.539097  804231 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1037-gcp\n", err: exit status 1
	I0916 23:56:50.539238  804231 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 23:56:50.539264  804231 cni.go:84] Creating CNI manager for ""
	I0916 23:56:50.539274  804231 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0916 23:56:50.540523  804231 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0916 23:56:50.541480  804231 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 23:56:50.545518  804231 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0916 23:56:50.545534  804231 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 23:56:50.563251  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0916 23:56:50.762002  804231 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 23:56:50.762092  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:56:50.762127  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-472903 minikube.k8s.io/updated_at=2025_09_16T23_56_50_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a minikube.k8s.io/name=ha-472903 minikube.k8s.io/primary=true
	I0916 23:56:50.771679  804231 ops.go:34] apiserver oom_adj: -16
	I0916 23:56:50.843646  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:56:51.344428  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:56:51.844440  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:56:52.344316  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:56:52.844594  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:56:53.343854  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:56:53.844615  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:56:54.344057  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:56:54.844066  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:56:55.344374  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:56:55.844478  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:56:55.927027  804231 kubeadm.go:1105] duration metric: took 5.165002596s to wait for elevateKubeSystemPrivileges
	I0916 23:56:55.927062  804231 kubeadm.go:394] duration metric: took 16.809033965s to StartCluster
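The repeated "kubectl get sa default" calls above are a readiness poll: minikube waits for the default ServiceAccount to exist before creating the minikube-rbac ClusterRoleBinding. A hedged sketch of the equivalent loop, with the binary and kubeconfig paths taken from the log:

    until sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5   # retry until the ServiceAccount controller has created "default"
    done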
	I0916 23:56:55.927081  804231 settings.go:142] acquiring lock: {Name:mk6c1a5bee23e141aad5180323c16c47ed580ab8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:56:55.927146  804231 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21550-749120/kubeconfig
	I0916 23:56:55.927785  804231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/kubeconfig: {Name:mk937123a8fee18625833b0bd778c4556f6787be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:56:55.928026  804231 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 23:56:55.928018  804231 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 23:56:55.928038  804231 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 23:56:55.928103  804231 start.go:241] waiting for startup goroutines ...
	I0916 23:56:55.928121  804231 addons.go:69] Setting default-storageclass=true in profile "ha-472903"
	I0916 23:56:55.928148  804231 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-472903"
	I0916 23:56:55.928126  804231 addons.go:69] Setting storage-provisioner=true in profile "ha-472903"
	I0916 23:56:55.928222  804231 addons.go:238] Setting addon storage-provisioner=true in "ha-472903"
	I0916 23:56:55.928269  804231 host.go:66] Checking if "ha-472903" exists ...
	I0916 23:56:55.928296  804231 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0916 23:56:55.928610  804231 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0916 23:56:55.928740  804231 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0916 23:56:55.954806  804231 kapi.go:59] client config for ha-472903: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key", CAFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 23:56:55.955519  804231 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0916 23:56:55.955545  804231 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0916 23:56:55.955543  804231 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0916 23:56:55.955553  804231 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0916 23:56:55.955611  804231 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0916 23:56:55.955620  804231 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0916 23:56:55.956096  804231 addons.go:238] Setting addon default-storageclass=true in "ha-472903"
	I0916 23:56:55.956145  804231 host.go:66] Checking if "ha-472903" exists ...
	I0916 23:56:55.956685  804231 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0916 23:56:55.957279  804231 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 23:56:55.961536  804231 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 23:56:55.961557  804231 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 23:56:55.961614  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:56:55.979896  804231 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 23:56:55.979925  804231 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 23:56:55.979985  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:56:55.982838  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0916 23:56:55.999402  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0916 23:56:56.011618  804231 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 23:56:56.095355  804231 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 23:56:56.110814  804231 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 23:56:56.153646  804231 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
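The sed pipeline above rewrites the CoreDNS ConfigMap so cluster DNS resolves host.minikube.internal. A sketch for spot-checking the result (paths from the log; the expected fragment reflects the values injected above):

    sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A3 'hosts {'
    # Expected fragment:
    #     hosts {
    #        192.168.49.1 host.minikube.internal
    #        fallthrough
    #     }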
	I0916 23:56:56.360175  804231 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0916 23:56:56.361116  804231 addons.go:514] duration metric: took 433.076562ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0916 23:56:56.361149  804231 start.go:246] waiting for cluster config update ...
	I0916 23:56:56.361163  804231 start.go:255] writing updated cluster config ...
	I0916 23:56:56.362407  804231 out.go:203] 
	I0916 23:56:56.363527  804231 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0916 23:56:56.363621  804231 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0916 23:56:56.364993  804231 out.go:179] * Starting "ha-472903-m02" control-plane node in "ha-472903" cluster
	I0916 23:56:56.365873  804231 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0916 23:56:56.366751  804231 out.go:179] * Pulling base image v0.0.48 ...
	I0916 23:56:56.367539  804231 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0916 23:56:56.367556  804231 cache.go:58] Caching tarball of preloaded images
	I0916 23:56:56.367630  804231 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0916 23:56:56.367646  804231 preload.go:172] Found /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 23:56:56.367654  804231 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0916 23:56:56.367711  804231 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0916 23:56:56.386547  804231 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0916 23:56:56.386565  804231 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0916 23:56:56.386580  804231 cache.go:232] Successfully downloaded all kic artifacts
	I0916 23:56:56.386607  804231 start.go:360] acquireMachinesLock for ha-472903-m02: {Name:mk81d8c73856cf84ceff1767a1681f3f3cdab773 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 23:56:56.386700  804231 start.go:364] duration metric: took 70.184µs to acquireMachinesLock for "ha-472903-m02"
	I0916 23:56:56.386738  804231 start.go:93] Provisioning new machine with config: &{Name:ha-472903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 23:56:56.386824  804231 start.go:125] createHost starting for "m02" (driver="docker")
	I0916 23:56:56.388402  804231 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0916 23:56:56.388536  804231 start.go:159] libmachine.API.Create for "ha-472903" (driver="docker")
	I0916 23:56:56.388563  804231 client.go:168] LocalClient.Create starting
	I0916 23:56:56.388626  804231 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem
	I0916 23:56:56.388664  804231 main.go:141] libmachine: Decoding PEM data...
	I0916 23:56:56.388687  804231 main.go:141] libmachine: Parsing certificate...
	I0916 23:56:56.388757  804231 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem
	I0916 23:56:56.388789  804231 main.go:141] libmachine: Decoding PEM data...
	I0916 23:56:56.388804  804231 main.go:141] libmachine: Parsing certificate...
	I0916 23:56:56.389042  804231 cli_runner.go:164] Run: docker network inspect ha-472903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:56:56.404624  804231 network_create.go:77] Found existing network {name:ha-472903 subnet:0xc001d2d140 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0916 23:56:56.404653  804231 kic.go:121] calculated static IP "192.168.49.3" for the "ha-472903-m02" container
	I0916 23:56:56.404719  804231 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 23:56:56.420231  804231 cli_runner.go:164] Run: docker volume create ha-472903-m02 --label name.minikube.sigs.k8s.io=ha-472903-m02 --label created_by.minikube.sigs.k8s.io=true
	I0916 23:56:56.436361  804231 oci.go:103] Successfully created a docker volume ha-472903-m02
	I0916 23:56:56.436430  804231 cli_runner.go:164] Run: docker run --rm --name ha-472903-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-472903-m02 --entrypoint /usr/bin/test -v ha-472903-m02:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0916 23:56:56.943375  804231 oci.go:107] Successfully prepared a docker volume ha-472903-m02
	I0916 23:56:56.943427  804231 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0916 23:56:56.943455  804231 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 23:56:56.943528  804231 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-472903-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 23:57:01.091161  804231 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-472903-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.147592491s)
	I0916 23:57:01.091197  804231 kic.go:203] duration metric: took 4.147738136s to extract preloaded images to volume ...
	W0916 23:57:01.091312  804231 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0916 23:57:01.091355  804231 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0916 23:57:01.091403  804231 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 23:57:01.142900  804231 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-472903-m02 --name ha-472903-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-472903-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-472903-m02 --network ha-472903 --ip 192.168.49.3 --volume ha-472903-m02:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0916 23:57:01.378924  804231 cli_runner.go:164] Run: docker container inspect ha-472903-m02 --format={{.State.Running}}
	I0916 23:57:01.396232  804231 cli_runner.go:164] Run: docker container inspect ha-472903-m02 --format={{.State.Status}}
	I0916 23:57:01.412927  804231 cli_runner.go:164] Run: docker exec ha-472903-m02 stat /var/lib/dpkg/alternatives/iptables
	I0916 23:57:01.469205  804231 oci.go:144] the created container "ha-472903-m02" has a running status.
	I0916 23:57:01.469235  804231 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa...
	I0916 23:57:01.517570  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0916 23:57:01.517621  804231 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 23:57:01.540818  804231 cli_runner.go:164] Run: docker container inspect ha-472903-m02 --format={{.State.Status}}
	I0916 23:57:01.560831  804231 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 23:57:01.560858  804231 kic_runner.go:114] Args: [docker exec --privileged ha-472903-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 23:57:01.615037  804231 cli_runner.go:164] Run: docker container inspect ha-472903-m02 --format={{.State.Status}}
	I0916 23:57:01.637921  804231 machine.go:93] provisionDockerMachine start ...
	I0916 23:57:01.638030  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0916 23:57:01.659741  804231 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:01.660056  804231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33549 <nil> <nil>}
	I0916 23:57:01.660078  804231 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 23:57:01.800716  804231 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-472903-m02
	
	I0916 23:57:01.800749  804231 ubuntu.go:182] provisioning hostname "ha-472903-m02"
	I0916 23:57:01.800817  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0916 23:57:01.819791  804231 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:01.820013  804231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33549 <nil> <nil>}
	I0916 23:57:01.820030  804231 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-472903-m02 && echo "ha-472903-m02" | sudo tee /etc/hostname
	I0916 23:57:01.967539  804231 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-472903-m02
	
	I0916 23:57:01.967631  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0916 23:57:01.987814  804231 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:01.988031  804231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33549 <nil> <nil>}
	I0916 23:57:01.988047  804231 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-472903-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-472903-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-472903-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 23:57:02.121536  804231 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 23:57:02.121571  804231 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-749120/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-749120/.minikube}
	I0916 23:57:02.121588  804231 ubuntu.go:190] setting up certificates
	I0916 23:57:02.121602  804231 provision.go:84] configureAuth start
	I0916 23:57:02.121663  804231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m02
	I0916 23:57:02.139056  804231 provision.go:143] copyHostCerts
	I0916 23:57:02.139098  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem
	I0916 23:57:02.139135  804231 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem, removing ...
	I0916 23:57:02.139147  804231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem
	I0916 23:57:02.139221  804231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem (1078 bytes)
	I0916 23:57:02.139329  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem
	I0916 23:57:02.139362  804231 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem, removing ...
	I0916 23:57:02.139372  804231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem
	I0916 23:57:02.139430  804231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem (1123 bytes)
	I0916 23:57:02.139521  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem
	I0916 23:57:02.139549  804231 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem, removing ...
	I0916 23:57:02.139559  804231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem
	I0916 23:57:02.139599  804231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem (1675 bytes)
	I0916 23:57:02.139690  804231 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem org=jenkins.ha-472903-m02 san=[127.0.0.1 192.168.49.3 ha-472903-m02 localhost minikube]
	I0916 23:57:02.262354  804231 provision.go:177] copyRemoteCerts
	I0916 23:57:02.262428  804231 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 23:57:02.262491  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0916 23:57:02.279792  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33549 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa Username:docker}
	I0916 23:57:02.375833  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 23:57:02.375903  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 23:57:02.400316  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 23:57:02.400373  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 23:57:02.422506  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 23:57:02.422550  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 23:57:02.445091  804231 provision.go:87] duration metric: took 323.464176ms to configureAuth
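configureAuth above generates a server certificate whose SANs must cover the new node's names and IP. A quick sketch for verifying them against the list printed in the log (127.0.0.1, 192.168.49.3, ha-472903-m02, localhost, minikube):

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'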
	I0916 23:57:02.445121  804231 ubuntu.go:206] setting minikube options for container-runtime
	I0916 23:57:02.445295  804231 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0916 23:57:02.445313  804231 machine.go:96] duration metric: took 807.372883ms to provisionDockerMachine
	I0916 23:57:02.445320  804231 client.go:171] duration metric: took 6.056751196s to LocalClient.Create
	I0916 23:57:02.445337  804231 start.go:167] duration metric: took 6.056804276s to libmachine.API.Create "ha-472903"
	I0916 23:57:02.445346  804231 start.go:293] postStartSetup for "ha-472903-m02" (driver="docker")
	I0916 23:57:02.445354  804231 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 23:57:02.445402  804231 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 23:57:02.445461  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0916 23:57:02.463550  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33549 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa Username:docker}
	I0916 23:57:02.559528  804231 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 23:57:02.562755  804231 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 23:57:02.562780  804231 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 23:57:02.562787  804231 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 23:57:02.562793  804231 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0916 23:57:02.562803  804231 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-749120/.minikube/addons for local assets ...
	I0916 23:57:02.562847  804231 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-749120/.minikube/files for local assets ...
	I0916 23:57:02.562920  804231 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> 7527072.pem in /etc/ssl/certs
	I0916 23:57:02.562930  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> /etc/ssl/certs/7527072.pem
	I0916 23:57:02.563018  804231 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 23:57:02.571142  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem --> /etc/ssl/certs/7527072.pem (1708 bytes)
	I0916 23:57:02.596466  804231 start.go:296] duration metric: took 151.106324ms for postStartSetup
	I0916 23:57:02.596768  804231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m02
	I0916 23:57:02.613316  804231 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0916 23:57:02.613561  804231 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 23:57:02.613601  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0916 23:57:02.632056  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33549 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa Username:docker}
	I0916 23:57:02.723085  804231 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 23:57:02.727430  804231 start.go:128] duration metric: took 6.340577447s to createHost
	I0916 23:57:02.727453  804231 start.go:83] releasing machines lock for "ha-472903-m02", held for 6.34073897s
	I0916 23:57:02.727519  804231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m02
	I0916 23:57:02.746152  804231 out.go:179] * Found network options:
	I0916 23:57:02.747248  804231 out.go:179]   - NO_PROXY=192.168.49.2
	W0916 23:57:02.748187  804231 proxy.go:120] fail to check proxy env: Error ip not in block
	W0916 23:57:02.748240  804231 proxy.go:120] fail to check proxy env: Error ip not in block
	I0916 23:57:02.748311  804231 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 23:57:02.748360  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0916 23:57:02.748367  804231 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 23:57:02.748427  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0916 23:57:02.765286  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33549 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa Username:docker}
	I0916 23:57:02.766625  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33549 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa Username:docker}
	I0916 23:57:02.856922  804231 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 23:57:02.936692  804231 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 23:57:02.936761  804231 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 23:57:02.961822  804231 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
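A sketch for confirming the CNI cleanup above: the bridge/podman configs should now carry a .mk_disabled suffix while the patched loopback config remains active (file names are those reported in the log):

    ls -la /etc/cni/net.d/ | grep -E 'mk_disabled|loopback'
    # Expected: 87-podman-bridge.conflist.mk_disabled, 100-crio-bridge.conf.mk_disabled,
    #           plus the loopback conf patched to cniVersion 1.0.0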
	I0916 23:57:02.961845  804231 start.go:495] detecting cgroup driver to use...
	I0916 23:57:02.961878  804231 detect.go:190] detected "systemd" cgroup driver on host os
	I0916 23:57:02.961919  804231 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 23:57:02.973318  804231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 23:57:02.983927  804231 docker.go:218] disabling cri-docker service (if available) ...
	I0916 23:57:02.983969  804231 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 23:57:02.996091  804231 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 23:57:03.009314  804231 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 23:57:03.072565  804231 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 23:57:03.140469  804231 docker.go:234] disabling docker service ...
	I0916 23:57:03.140526  804231 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 23:57:03.157179  804231 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 23:57:03.167955  804231 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 23:57:03.233386  804231 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 23:57:03.296537  804231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 23:57:03.307574  804231 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 23:57:03.323754  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0916 23:57:03.334305  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 23:57:03.343767  804231 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0916 23:57:03.343826  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0916 23:57:03.353029  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 23:57:03.361991  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 23:57:03.371206  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 23:57:03.380598  804231 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 23:57:03.389216  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 23:57:03.398125  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 23:57:03.407145  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 23:57:03.416183  804231 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 23:57:03.424123  804231 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 23:57:03.432185  804231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:03.493561  804231 ssh_runner.go:195] Run: sudo systemctl restart containerd
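Among other things, the sed pipeline above pins three settings in /etc/containerd/config.toml: the systemd cgroup driver, the pause (sandbox) image, and the CNI conf_dir. A minimal sketch to spot-check them on the node after the restart (illustrative only, not minikube's own verification):

    grep -E 'SystemdCgroup = true|sandbox_image = "registry.k8s.io/pause:3.10.1"|conf_dir = "/etc/cni/net.d"' /etc/containerd/config.toml
    sudo systemctl is-active containerd    # expected: active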
	I0916 23:57:03.591942  804231 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0916 23:57:03.592010  804231 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0916 23:57:03.595710  804231 start.go:563] Will wait 60s for crictl version
	I0916 23:57:03.595768  804231 ssh_runner.go:195] Run: which crictl
	I0916 23:57:03.599108  804231 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 23:57:03.633181  804231 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0916 23:57:03.633231  804231 ssh_runner.go:195] Run: containerd --version
	I0916 23:57:03.656364  804231 ssh_runner.go:195] Run: containerd --version
	I0916 23:57:03.680150  804231 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0916 23:57:03.681177  804231 out.go:179]   - env NO_PROXY=192.168.49.2
	I0916 23:57:03.682053  804231 cli_runner.go:164] Run: docker network inspect ha-472903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:57:03.699720  804231 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 23:57:03.703306  804231 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 23:57:03.714275  804231 mustload.go:65] Loading cluster: ha-472903
	I0916 23:57:03.714452  804231 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0916 23:57:03.714650  804231 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0916 23:57:03.730631  804231 host.go:66] Checking if "ha-472903" exists ...
	I0916 23:57:03.730849  804231 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903 for IP: 192.168.49.3
	I0916 23:57:03.730859  804231 certs.go:194] generating shared ca certs ...
	I0916 23:57:03.730877  804231 certs.go:226] acquiring lock for ca certs: {Name:mk87d179b4a631193bd9c86db8034ccf19400cde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:03.730987  804231 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key
	I0916 23:57:03.731023  804231 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key
	I0916 23:57:03.731032  804231 certs.go:256] generating profile certs ...
	I0916 23:57:03.731092  804231 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key
	I0916 23:57:03.731114  804231 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.d67fba4a
	I0916 23:57:03.731125  804231 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.d67fba4a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I0916 23:57:03.830248  804231 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.d67fba4a ...
	I0916 23:57:03.830275  804231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.d67fba4a: {Name:mk3e97859392ca0d50685e4c31c19acd3c590753 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:03.830438  804231 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.d67fba4a ...
	I0916 23:57:03.830453  804231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.d67fba4a: {Name:mkd3ec6288ef831df369d4ec39839c410f5116ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:03.830530  804231 certs.go:381] copying /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.d67fba4a -> /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt
	I0916 23:57:03.830653  804231 certs.go:385] copying /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.d67fba4a -> /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key
	I0916 23:57:03.830779  804231 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key
	I0916 23:57:03.830794  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 23:57:03.830809  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 23:57:03.830823  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 23:57:03.830836  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 23:57:03.830846  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 23:57:03.830855  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 23:57:03.830864  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 23:57:03.830873  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 23:57:03.830920  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem (1338 bytes)
	W0916 23:57:03.830952  804231 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707_empty.pem, impossibly tiny 0 bytes
	I0916 23:57:03.830962  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem (1675 bytes)
	I0916 23:57:03.830981  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem (1078 bytes)
	I0916 23:57:03.831001  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem (1123 bytes)
	I0916 23:57:03.831021  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem (1675 bytes)
	I0916 23:57:03.831058  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem (1708 bytes)
	I0916 23:57:03.831081  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem -> /usr/share/ca-certificates/752707.pem
	I0916 23:57:03.831094  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> /usr/share/ca-certificates/7527072.pem
	I0916 23:57:03.831107  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:03.831156  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:57:03.847964  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0916 23:57:03.934599  804231 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0916 23:57:03.938331  804231 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0916 23:57:03.950286  804231 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0916 23:57:03.953541  804231 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0916 23:57:03.965169  804231 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0916 23:57:03.968351  804231 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0916 23:57:03.979814  804231 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0916 23:57:03.982969  804231 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0916 23:57:03.993972  804231 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0916 23:57:03.997171  804231 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0916 23:57:04.008607  804231 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0916 23:57:04.011687  804231 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0916 23:57:04.023019  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 23:57:04.046509  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 23:57:04.069781  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 23:57:04.092702  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0916 23:57:04.114933  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0916 23:57:04.137173  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1671 bytes)
	I0916 23:57:04.159280  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 23:57:04.181367  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 23:57:04.203980  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem --> /usr/share/ca-certificates/752707.pem (1338 bytes)
	I0916 23:57:04.230248  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem --> /usr/share/ca-certificates/7527072.pem (1708 bytes)
	I0916 23:57:04.253628  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 23:57:04.276223  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0916 23:57:04.293552  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0916 23:57:04.309978  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0916 23:57:04.326237  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0916 23:57:04.342704  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0916 23:57:04.359099  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0916 23:57:04.375242  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
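Once the certificates above are in place on m02, their SANs can be checked directly; the apiserver cert generated earlier was issued for the HA VIP and both node IPs. A minimal sketch, run on the node:

    sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt | grep -A1 'Subject Alternative Name'
    # expect 192.168.49.2, 192.168.49.3 and 192.168.49.254 among the SANs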
	I0916 23:57:04.391611  804231 ssh_runner.go:195] Run: openssl version
	I0916 23:57:04.396637  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/752707.pem && ln -fs /usr/share/ca-certificates/752707.pem /etc/ssl/certs/752707.pem"
	I0916 23:57:04.405389  804231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/752707.pem
	I0916 23:57:04.408604  804231 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 23:54 /usr/share/ca-certificates/752707.pem
	I0916 23:57:04.408651  804231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/752707.pem
	I0916 23:57:04.414862  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/752707.pem /etc/ssl/certs/51391683.0"
	I0916 23:57:04.423583  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7527072.pem && ln -fs /usr/share/ca-certificates/7527072.pem /etc/ssl/certs/7527072.pem"
	I0916 23:57:04.432421  804231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7527072.pem
	I0916 23:57:04.435706  804231 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 23:54 /usr/share/ca-certificates/7527072.pem
	I0916 23:57:04.435752  804231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7527072.pem
	I0916 23:57:04.441863  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7527072.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 23:57:04.450595  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 23:57:04.459588  804231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:04.462866  804231 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:04.462907  804231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:04.469279  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
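The 8-hex-digit link names used above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-hash names: the value printed by openssl x509 -hash is what the symlink in /etc/ssl/certs must be called for the CA to be found by hash lookup. A minimal sketch of the same derivation for the minikubeCA certificate:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"    # h is b5213941 here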
	I0916 23:57:04.478135  804231 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 23:57:04.481236  804231 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 23:57:04.481288  804231 kubeadm.go:926] updating node {m02 192.168.49.3 8443 v1.34.0 containerd true true} ...
	I0916 23:57:04.481383  804231 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-472903-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 23:57:04.481425  804231 kube-vip.go:115] generating kube-vip config ...
	I0916 23:57:04.481462  804231 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0916 23:57:04.492937  804231 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
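kube-vip's IPVS-based control-plane load balancing needs the ip_vs kernel modules; the empty lsmod output above is why minikube gives up on that feature and generates the manifest below without it, relying on the ARP-based VIP instead. On a host where the modules exist they could be loaded before retrying (a sketch; inside the kic container this is generally not possible):

    sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh
    lsmod | grep ip_vs    # the check above wants at least one line here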
	I0916 23:57:04.492999  804231 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0916 23:57:04.493041  804231 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0916 23:57:04.501084  804231 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 23:57:04.501123  804231 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0916 23:57:04.509217  804231 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0916 23:57:04.525587  804231 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 23:57:04.544042  804231 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0916 23:57:04.561542  804231 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0916 23:57:04.564725  804231 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 23:57:04.574819  804231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:04.638378  804231 ssh_runner.go:195] Run: sudo systemctl start kubelet
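At this point the kubelet unit, the kubeadm drop-in and the kube-vip static-pod manifest written above are all on disk on m02. A minimal sanity check on that node (illustrative, e.g. via minikube -p ha-472903 ssh -n m02):

    systemctl cat kubelet | grep -- '--node-ip=192.168.49.3'    # flag carried by the drop-in
    systemctl is-active kubelet                                 # expected: active after the start above
    ls /etc/kubernetes/manifests/kube-vip.yaml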
	I0916 23:57:04.659569  804231 host.go:66] Checking if "ha-472903" exists ...
	I0916 23:57:04.659878  804231 start.go:317] joinCluster: &{Name:ha-472903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 23:57:04.659986  804231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0916 23:57:04.660033  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:57:04.678136  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0916 23:57:04.817608  804231 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 23:57:04.817663  804231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 79akng.11lpa8n1ba4yh5m1 --discovery-token-ca-cert-hash sha256:52c78ec9ad9a2dc0941e43ce337b864c76ea573e452bc75ed737e69ad76deac1 --ignore-preflight-errors=all --cri-socket unix:///run/containerd/containerd.sock --node-name=ha-472903-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443"
	I0916 23:57:23.327384  804231 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 79akng.11lpa8n1ba4yh5m1 --discovery-token-ca-cert-hash sha256:52c78ec9ad9a2dc0941e43ce337b864c76ea573e452bc75ed737e69ad76deac1 --ignore-preflight-errors=all --cri-socket unix:///run/containerd/containerd.sock --node-name=ha-472903-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443": (18.509693377s)
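The join line minikube runs above follows the standard kubeadm flow: the token and discovery hash come from the earlier "kubeadm token create --print-join-command --ttl=0" on the primary control plane, and the --control-plane/--apiserver-advertise-address flags are appended per node. A minimal way to confirm the result from the primary (sketch):

    kubectl get nodes -o wide    # ha-472903-m02 should now be listed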
	I0916 23:57:23.327447  804231 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0916 23:57:23.521334  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-472903-m02 minikube.k8s.io/updated_at=2025_09_16T23_57_23_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a minikube.k8s.io/name=ha-472903 minikube.k8s.io/primary=false
	I0916 23:57:23.592991  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-472903-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0916 23:57:23.664899  804231 start.go:319] duration metric: took 19.005017018s to joinCluster
	I0916 23:57:23.664975  804231 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 23:57:23.665223  804231 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0916 23:57:23.665877  804231 out.go:179] * Verifying Kubernetes components...
	I0916 23:57:23.666680  804231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:23.766393  804231 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 23:57:23.779164  804231 kapi.go:59] client config for ha-472903: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key", CAFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0916 23:57:23.779228  804231 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0916 23:57:23.779511  804231 node_ready.go:35] waiting up to 6m0s for node "ha-472903-m02" to be "Ready" ...
	I0916 23:57:24.283593  804231 node_ready.go:49] node "ha-472903-m02" is "Ready"
	I0916 23:57:24.283628  804231 node_ready.go:38] duration metric: took 504.097895ms for node "ha-472903-m02" to be "Ready" ...
	I0916 23:57:24.283648  804231 api_server.go:52] waiting for apiserver process to appear ...
	I0916 23:57:24.283699  804231 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 23:57:24.295735  804231 api_server.go:72] duration metric: took 630.723924ms to wait for apiserver process to appear ...
	I0916 23:57:24.295758  804231 api_server.go:88] waiting for apiserver healthz status ...
	I0916 23:57:24.295774  804231 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0916 23:57:24.299650  804231 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0916 23:57:24.300537  804231 api_server.go:141] control plane version: v1.34.0
	I0916 23:57:24.300558  804231 api_server.go:131] duration metric: took 4.795429ms to wait for apiserver health ...
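The same two probes can be reproduced from the host; a kubeadm-style cluster grants unauthenticated access to /healthz and /version by default, so skipping TLS verification is enough here (a sketch against the node IP used above):

    curl -sk https://192.168.49.2:8443/healthz ; echo       # prints: ok
    curl -sk https://192.168.49.2:8443/version | grep gitVersion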
	I0916 23:57:24.300566  804231 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 23:57:24.304572  804231 system_pods.go:59] 19 kube-system pods found
	I0916 23:57:24.304598  804231 system_pods.go:61] "coredns-66bc5c9577-c94hz" [774f1c0f-9759-44c2-957d-5a97670f951b] Running
	I0916 23:57:24.304604  804231 system_pods.go:61] "coredns-66bc5c9577-qn8m7" [1c58205e-e865-42fc-8282-23e3d779ee97] Running
	I0916 23:57:24.304608  804231 system_pods.go:61] "etcd-ha-472903" [e333577b-838c-41c5-ba86-ce3d7de57077] Running
	I0916 23:57:24.304611  804231 system_pods.go:61] "etcd-ha-472903-m02" [8a478117-c53d-4621-aa09-be3c16d386c0] Pending
	I0916 23:57:24.304615  804231 system_pods.go:61] "kindnet-lh7dv" [1da43ca7-9af7-4573-9cdc-fd21b098ca2c] Running
	I0916 23:57:24.304621  804231 system_pods.go:61] "kindnet-mwf8l" [8c9533d3-defe-487b-a9b4-0502fb8f2d2a] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-mwf8l": pod kindnet-mwf8l is being deleted, cannot be assigned to a host)
	I0916 23:57:24.304628  804231 system_pods.go:61] "kindnet-q7c7s" [85db5b30-8ace-4bb0-8886-32b9ca032b2b] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-q7c7s": pod kindnet-q7c7s is already assigned to node "ha-472903-m02")
	I0916 23:57:24.304639  804231 system_pods.go:61] "kube-apiserver-ha-472903" [e2844751-3962-4753-8b63-79c124dd5fd7] Running
	I0916 23:57:24.304643  804231 system_pods.go:61] "kube-apiserver-ha-472903-m02" [6675419c-7693-4970-b73c-8415bcda1684] Pending
	I0916 23:57:24.304646  804231 system_pods.go:61] "kube-controller-manager-ha-472903" [be5cfd0b-a3b9-44cf-8cde-74e9eb89c738] Running
	I0916 23:57:24.304650  804231 system_pods.go:61] "kube-controller-manager-ha-472903-m02" [54f6e7e0-0a78-4651-b24f-f902c6bf7efb] Pending
	I0916 23:57:24.304657  804231 system_pods.go:61] "kube-proxy-58lkb" [32fed88c-ce9e-4536-8e96-04ab5b4f5d42] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-58lkb": pod kube-proxy-58lkb is already assigned to node "ha-472903-m02")
	I0916 23:57:24.304662  804231 system_pods.go:61] "kube-proxy-d4m8f" [d4a70eec-48a7-4ea6-871a-1b5ed2beca9a] Running
	I0916 23:57:24.304666  804231 system_pods.go:61] "kube-proxy-mf26q" [34502b32-75c1-4078-abd2-4e4d625252d8] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-mf26q": pod kube-proxy-mf26q is already assigned to node "ha-472903-m02")
	I0916 23:57:24.304670  804231 system_pods.go:61] "kube-scheduler-ha-472903" [e949de65-b218-45cb-abe7-79b704aae473] Running
	I0916 23:57:24.304677  804231 system_pods.go:61] "kube-scheduler-ha-472903-m02" [08b5a4f0-3aa6-4a82-b171-afc1eafcd4c2] Pending
	I0916 23:57:24.304679  804231 system_pods.go:61] "kube-vip-ha-472903" [ccdab212-cf0c-4bf0-958b-173e1008f7bc] Running
	I0916 23:57:24.304682  804231 system_pods.go:61] "kube-vip-ha-472903-m02" [748f096f-bec6-4de8-92f0-128db827bdd6] Pending
	I0916 23:57:24.304687  804231 system_pods.go:61] "storage-provisioner" [ac7f283e-4d28-46cf-a519-bd227237d5e7] Running
	I0916 23:57:24.304694  804231 system_pods.go:74] duration metric: took 4.122792ms to wait for pod list to return data ...
	I0916 23:57:24.304700  804231 default_sa.go:34] waiting for default service account to be created ...
	I0916 23:57:24.307165  804231 default_sa.go:45] found service account: "default"
	I0916 23:57:24.307183  804231 default_sa.go:55] duration metric: took 2.474442ms for default service account to be created ...
	I0916 23:57:24.307190  804231 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 23:57:24.310491  804231 system_pods.go:86] 19 kube-system pods found
	I0916 23:57:24.310512  804231 system_pods.go:89] "coredns-66bc5c9577-c94hz" [774f1c0f-9759-44c2-957d-5a97670f951b] Running
	I0916 23:57:24.310517  804231 system_pods.go:89] "coredns-66bc5c9577-qn8m7" [1c58205e-e865-42fc-8282-23e3d779ee97] Running
	I0916 23:57:24.310520  804231 system_pods.go:89] "etcd-ha-472903" [e333577b-838c-41c5-ba86-ce3d7de57077] Running
	I0916 23:57:24.310524  804231 system_pods.go:89] "etcd-ha-472903-m02" [8a478117-c53d-4621-aa09-be3c16d386c0] Pending
	I0916 23:57:24.310527  804231 system_pods.go:89] "kindnet-lh7dv" [1da43ca7-9af7-4573-9cdc-fd21b098ca2c] Running
	I0916 23:57:24.310532  804231 system_pods.go:89] "kindnet-mwf8l" [8c9533d3-defe-487b-a9b4-0502fb8f2d2a] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-mwf8l": pod kindnet-mwf8l is being deleted, cannot be assigned to a host)
	I0916 23:57:24.310556  804231 system_pods.go:89] "kindnet-q7c7s" [85db5b30-8ace-4bb0-8886-32b9ca032b2b] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-q7c7s": pod kindnet-q7c7s is already assigned to node "ha-472903-m02")
	I0916 23:57:24.310566  804231 system_pods.go:89] "kube-apiserver-ha-472903" [e2844751-3962-4753-8b63-79c124dd5fd7] Running
	I0916 23:57:24.310571  804231 system_pods.go:89] "kube-apiserver-ha-472903-m02" [6675419c-7693-4970-b73c-8415bcda1684] Pending
	I0916 23:57:24.310576  804231 system_pods.go:89] "kube-controller-manager-ha-472903" [be5cfd0b-a3b9-44cf-8cde-74e9eb89c738] Running
	I0916 23:57:24.310580  804231 system_pods.go:89] "kube-controller-manager-ha-472903-m02" [54f6e7e0-0a78-4651-b24f-f902c6bf7efb] Pending
	I0916 23:57:24.310588  804231 system_pods.go:89] "kube-proxy-58lkb" [32fed88c-ce9e-4536-8e96-04ab5b4f5d42] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-58lkb": pod kube-proxy-58lkb is already assigned to node "ha-472903-m02")
	I0916 23:57:24.310591  804231 system_pods.go:89] "kube-proxy-d4m8f" [d4a70eec-48a7-4ea6-871a-1b5ed2beca9a] Running
	I0916 23:57:24.310596  804231 system_pods.go:89] "kube-proxy-mf26q" [34502b32-75c1-4078-abd2-4e4d625252d8] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-mf26q": pod kube-proxy-mf26q is already assigned to node "ha-472903-m02")
	I0916 23:57:24.310600  804231 system_pods.go:89] "kube-scheduler-ha-472903" [e949de65-b218-45cb-abe7-79b704aae473] Running
	I0916 23:57:24.310603  804231 system_pods.go:89] "kube-scheduler-ha-472903-m02" [08b5a4f0-3aa6-4a82-b171-afc1eafcd4c2] Pending
	I0916 23:57:24.310608  804231 system_pods.go:89] "kube-vip-ha-472903" [ccdab212-cf0c-4bf0-958b-173e1008f7bc] Running
	I0916 23:57:24.310611  804231 system_pods.go:89] "kube-vip-ha-472903-m02" [748f096f-bec6-4de8-92f0-128db827bdd6] Pending
	I0916 23:57:24.310614  804231 system_pods.go:89] "storage-provisioner" [ac7f283e-4d28-46cf-a519-bd227237d5e7] Running
	I0916 23:57:24.310621  804231 system_pods.go:126] duration metric: took 3.426124ms to wait for k8s-apps to be running ...
	I0916 23:57:24.310629  804231 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 23:57:24.310666  804231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 23:57:24.322152  804231 system_svc.go:56] duration metric: took 11.515834ms WaitForService to wait for kubelet
	I0916 23:57:24.322176  804231 kubeadm.go:578] duration metric: took 657.167547ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 23:57:24.322199  804231 node_conditions.go:102] verifying NodePressure condition ...
	I0916 23:57:24.327707  804231 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 23:57:24.327734  804231 node_conditions.go:123] node cpu capacity is 8
	I0916 23:57:24.327748  804231 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 23:57:24.327754  804231 node_conditions.go:123] node cpu capacity is 8
	I0916 23:57:24.327759  804231 node_conditions.go:105] duration metric: took 5.554046ms to run NodePressure ...
	I0916 23:57:24.327772  804231 start.go:241] waiting for startup goroutines ...
	I0916 23:57:24.327803  804231 start.go:255] writing updated cluster config ...
	I0916 23:57:24.329316  804231 out.go:203] 
	I0916 23:57:24.330356  804231 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0916 23:57:24.330485  804231 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0916 23:57:24.331956  804231 out.go:179] * Starting "ha-472903-m03" control-plane node in "ha-472903" cluster
	I0916 23:57:24.332973  804231 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0916 23:57:24.333962  804231 out.go:179] * Pulling base image v0.0.48 ...
	I0916 23:57:24.334852  804231 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0916 23:57:24.334875  804231 cache.go:58] Caching tarball of preloaded images
	I0916 23:57:24.334942  804231 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0916 23:57:24.334986  804231 preload.go:172] Found /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 23:57:24.334997  804231 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0916 23:57:24.335117  804231 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0916 23:57:24.357217  804231 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0916 23:57:24.357233  804231 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0916 23:57:24.357242  804231 cache.go:232] Successfully downloaded all kic artifacts
	I0916 23:57:24.357267  804231 start.go:360] acquireMachinesLock for ha-472903-m03: {Name:mk61000bb8e4699ca3310a7fc257e30a156b69de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 23:57:24.357354  804231 start.go:364] duration metric: took 71.354µs to acquireMachinesLock for "ha-472903-m03"
	I0916 23:57:24.357375  804231 start.go:93] Provisioning new machine with config: &{Name:ha-472903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:fal
se kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPa
th: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 23:57:24.357498  804231 start.go:125] createHost starting for "m03" (driver="docker")
	I0916 23:57:24.358917  804231 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0916 23:57:24.358994  804231 start.go:159] libmachine.API.Create for "ha-472903" (driver="docker")
	I0916 23:57:24.359023  804231 client.go:168] LocalClient.Create starting
	I0916 23:57:24.359071  804231 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem
	I0916 23:57:24.359103  804231 main.go:141] libmachine: Decoding PEM data...
	I0916 23:57:24.359116  804231 main.go:141] libmachine: Parsing certificate...
	I0916 23:57:24.359164  804231 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem
	I0916 23:57:24.359182  804231 main.go:141] libmachine: Decoding PEM data...
	I0916 23:57:24.359192  804231 main.go:141] libmachine: Parsing certificate...
	I0916 23:57:24.359366  804231 cli_runner.go:164] Run: docker network inspect ha-472903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:57:24.375654  804231 network_create.go:77] Found existing network {name:ha-472903 subnet:0xc001b33bf0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0916 23:57:24.375684  804231 kic.go:121] calculated static IP "192.168.49.4" for the "ha-472903-m03" container
	I0916 23:57:24.375740  804231 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 23:57:24.392165  804231 cli_runner.go:164] Run: docker volume create ha-472903-m03 --label name.minikube.sigs.k8s.io=ha-472903-m03 --label created_by.minikube.sigs.k8s.io=true
	I0916 23:57:24.408273  804231 oci.go:103] Successfully created a docker volume ha-472903-m03
	I0916 23:57:24.408342  804231 cli_runner.go:164] Run: docker run --rm --name ha-472903-m03-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-472903-m03 --entrypoint /usr/bin/test -v ha-472903-m03:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0916 23:57:24.957699  804231 oci.go:107] Successfully prepared a docker volume ha-472903-m03
	I0916 23:57:24.957748  804231 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0916 23:57:24.957783  804231 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 23:57:24.957856  804231 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-472903-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 23:57:29.095091  804231 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-472903-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.13717471s)
	I0916 23:57:29.095123  804231 kic.go:203] duration metric: took 4.137337977s to extract preloaded images to volume ...
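The preloaded images are unpacked into the named volume that will back /var inside the ha-472903-m03 container (the docker run below mounts ha-472903-m03:/var). A quick look at what landed there, assuming the preload uses the usual lib/containerd layout under the volume (illustrative):

    docker volume inspect ha-472903-m03 --format '{{.Mountpoint}}'
    docker run --rm -v ha-472903-m03:/var alpine ls /var/lib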
	W0916 23:57:29.095214  804231 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0916 23:57:29.095253  804231 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0916 23:57:29.095300  804231 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 23:57:29.145859  804231 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-472903-m03 --name ha-472903-m03 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-472903-m03 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-472903-m03 --network ha-472903 --ip 192.168.49.4 --volume ha-472903-m03:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0916 23:57:29.392873  804231 cli_runner.go:164] Run: docker container inspect ha-472903-m03 --format={{.State.Running}}
	I0916 23:57:29.412389  804231 cli_runner.go:164] Run: docker container inspect ha-472903-m03 --format={{.State.Status}}
	I0916 23:57:29.430593  804231 cli_runner.go:164] Run: docker exec ha-472903-m03 stat /var/lib/dpkg/alternatives/iptables
	I0916 23:57:29.476672  804231 oci.go:144] the created container "ha-472903-m03" has a running status.
	I0916 23:57:29.476707  804231 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa...
	I0916 23:57:29.927926  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0916 23:57:29.927968  804231 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 23:57:29.954518  804231 cli_runner.go:164] Run: docker container inspect ha-472903-m03 --format={{.State.Status}}
	I0916 23:57:29.975503  804231 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 23:57:29.975522  804231 kic_runner.go:114] Args: [docker exec --privileged ha-472903-m03 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 23:57:30.023965  804231 cli_runner.go:164] Run: docker container inspect ha-472903-m03 --format={{.State.Status}}
	I0916 23:57:30.040966  804231 machine.go:93] provisionDockerMachine start ...
	I0916 23:57:30.041051  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0916 23:57:30.058157  804231 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:30.058388  804231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33554 <nil> <nil>}
	I0916 23:57:30.058400  804231 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 23:57:30.190964  804231 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-472903-m03
	
	I0916 23:57:30.190995  804231 ubuntu.go:182] provisioning hostname "ha-472903-m03"
	I0916 23:57:30.191059  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0916 23:57:30.208862  804231 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:30.209123  804231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33554 <nil> <nil>}
	I0916 23:57:30.209144  804231 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-472903-m03 && echo "ha-472903-m03" | sudo tee /etc/hostname
	I0916 23:57:30.354363  804231 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-472903-m03
	
	I0916 23:57:30.354466  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0916 23:57:30.372285  804231 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:30.372570  804231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33554 <nil> <nil>}
	I0916 23:57:30.372590  804231 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-472903-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-472903-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-472903-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 23:57:30.504861  804231 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 23:57:30.504898  804231 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-749120/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-749120/.minikube}
	I0916 23:57:30.504920  804231 ubuntu.go:190] setting up certificates
	I0916 23:57:30.504933  804231 provision.go:84] configureAuth start
	I0916 23:57:30.504996  804231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m03
	I0916 23:57:30.522218  804231 provision.go:143] copyHostCerts
	I0916 23:57:30.522259  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem
	I0916 23:57:30.522297  804231 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem, removing ...
	I0916 23:57:30.522306  804231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem
	I0916 23:57:30.522369  804231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem (1078 bytes)
	I0916 23:57:30.522483  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem
	I0916 23:57:30.522506  804231 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem, removing ...
	I0916 23:57:30.522510  804231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem
	I0916 23:57:30.522547  804231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem (1123 bytes)
	I0916 23:57:30.522650  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem
	I0916 23:57:30.522673  804231 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem, removing ...
	I0916 23:57:30.522678  804231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem
	I0916 23:57:30.522703  804231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem (1675 bytes)
	I0916 23:57:30.522769  804231 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem org=jenkins.ha-472903-m03 san=[127.0.0.1 192.168.49.4 ha-472903-m03 localhost minikube]
	I0916 23:57:30.644066  804231 provision.go:177] copyRemoteCerts
	I0916 23:57:30.644118  804231 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 23:57:30.644153  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0916 23:57:30.661612  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33554 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa Username:docker}
	I0916 23:57:30.757452  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 23:57:30.757504  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 23:57:30.782942  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 23:57:30.782994  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 23:57:30.806508  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 23:57:30.806562  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 23:57:30.829686  804231 provision.go:87] duration metric: took 324.735799ms to configureAuth
	I0916 23:57:30.829709  804231 ubuntu.go:206] setting minikube options for container-runtime
	I0916 23:57:30.829902  804231 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0916 23:57:30.829916  804231 machine.go:96] duration metric: took 788.930334ms to provisionDockerMachine
	I0916 23:57:30.829925  804231 client.go:171] duration metric: took 6.470893656s to LocalClient.Create
	I0916 23:57:30.829958  804231 start.go:167] duration metric: took 6.470963089s to libmachine.API.Create "ha-472903"
	I0916 23:57:30.829971  804231 start.go:293] postStartSetup for "ha-472903-m03" (driver="docker")
	I0916 23:57:30.829982  804231 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 23:57:30.830042  804231 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 23:57:30.830092  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0916 23:57:30.847215  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33554 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa Username:docker}
	I0916 23:57:30.945849  804231 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 23:57:30.949055  804231 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 23:57:30.949086  804231 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 23:57:30.949098  804231 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 23:57:30.949107  804231 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0916 23:57:30.949120  804231 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-749120/.minikube/addons for local assets ...
	I0916 23:57:30.949174  804231 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-749120/.minikube/files for local assets ...
	I0916 23:57:30.949274  804231 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> 7527072.pem in /etc/ssl/certs
	I0916 23:57:30.949286  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> /etc/ssl/certs/7527072.pem
	I0916 23:57:30.949392  804231 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 23:57:30.957998  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem --> /etc/ssl/certs/7527072.pem (1708 bytes)
	I0916 23:57:30.983779  804231 start.go:296] duration metric: took 153.794843ms for postStartSetup
	I0916 23:57:30.984109  804231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m03
	I0916 23:57:31.001367  804231 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0916 23:57:31.001618  804231 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 23:57:31.001659  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0916 23:57:31.019034  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33554 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa Username:docker}
	I0916 23:57:31.110814  804231 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 23:57:31.115046  804231 start.go:128] duration metric: took 6.757532739s to createHost
	I0916 23:57:31.115072  804231 start.go:83] releasing machines lock for "ha-472903-m03", held for 6.757707303s
	I0916 23:57:31.115154  804231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m03
	I0916 23:57:31.133371  804231 out.go:179] * Found network options:
	I0916 23:57:31.134481  804231 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0916 23:57:31.135570  804231 proxy.go:120] fail to check proxy env: Error ip not in block
	W0916 23:57:31.135598  804231 proxy.go:120] fail to check proxy env: Error ip not in block
	W0916 23:57:31.135626  804231 proxy.go:120] fail to check proxy env: Error ip not in block
	W0916 23:57:31.135644  804231 proxy.go:120] fail to check proxy env: Error ip not in block
	I0916 23:57:31.135714  804231 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 23:57:31.135763  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0916 23:57:31.135778  804231 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 23:57:31.135845  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0916 23:57:31.152320  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33554 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa Username:docker}
	I0916 23:57:31.153909  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33554 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa Username:docker}
	I0916 23:57:31.320495  804231 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 23:57:31.348141  804231 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 23:57:31.348214  804231 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 23:57:31.373693  804231 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0916 23:57:31.373720  804231 start.go:495] detecting cgroup driver to use...
	I0916 23:57:31.373748  804231 detect.go:190] detected "systemd" cgroup driver on host os
	I0916 23:57:31.373802  804231 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 23:57:31.385560  804231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 23:57:31.396165  804231 docker.go:218] disabling cri-docker service (if available) ...
	I0916 23:57:31.396214  804231 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 23:57:31.409119  804231 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 23:57:31.422244  804231 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 23:57:31.489491  804231 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 23:57:31.557098  804231 docker.go:234] disabling docker service ...
	I0916 23:57:31.557149  804231 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 23:57:31.574601  804231 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 23:57:31.585773  804231 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 23:57:31.649988  804231 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 23:57:31.717070  804231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 23:57:31.727904  804231 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 23:57:31.743685  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0916 23:57:31.755962  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 23:57:31.766072  804231 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0916 23:57:31.766138  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0916 23:57:31.775522  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 23:57:31.785914  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 23:57:31.795134  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 23:57:31.804565  804231 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 23:57:31.813319  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 23:57:31.822500  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 23:57:31.831597  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 23:57:31.840887  804231 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 23:57:31.848842  804231 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 23:57:31.857026  804231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:31.920521  804231 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0916 23:57:32.022746  804231 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0916 23:57:32.022804  804231 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0916 23:57:32.026838  804231 start.go:563] Will wait 60s for crictl version
	I0916 23:57:32.026888  804231 ssh_runner.go:195] Run: which crictl
	I0916 23:57:32.030295  804231 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 23:57:32.064100  804231 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0916 23:57:32.064158  804231 ssh_runner.go:195] Run: containerd --version
	I0916 23:57:32.088276  804231 ssh_runner.go:195] Run: containerd --version
	I0916 23:57:32.114182  804231 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0916 23:57:32.115194  804231 out.go:179]   - env NO_PROXY=192.168.49.2
	I0916 23:57:32.116236  804231 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0916 23:57:32.117151  804231 cli_runner.go:164] Run: docker network inspect ha-472903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:57:32.133290  804231 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 23:57:32.136901  804231 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 23:57:32.147860  804231 mustload.go:65] Loading cluster: ha-472903
	I0916 23:57:32.148060  804231 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0916 23:57:32.148275  804231 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0916 23:57:32.164278  804231 host.go:66] Checking if "ha-472903" exists ...
	I0916 23:57:32.164570  804231 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903 for IP: 192.168.49.4
	I0916 23:57:32.164584  804231 certs.go:194] generating shared ca certs ...
	I0916 23:57:32.164601  804231 certs.go:226] acquiring lock for ca certs: {Name:mk87d179b4a631193bd9c86db8034ccf19400cde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:32.164751  804231 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key
	I0916 23:57:32.164800  804231 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key
	I0916 23:57:32.164814  804231 certs.go:256] generating profile certs ...
	I0916 23:57:32.164911  804231 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key
	I0916 23:57:32.164940  804231 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.14b885b8
	I0916 23:57:32.164958  804231 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.14b885b8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I0916 23:57:32.342596  804231 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.14b885b8 ...
	I0916 23:57:32.342623  804231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.14b885b8: {Name:mk455c3f0ae4544ddcdf75c25cbd1b87a24e61a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:32.342787  804231 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.14b885b8 ...
	I0916 23:57:32.342799  804231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.14b885b8: {Name:mkbd551bf9ae23c129f7e263550d20b4aac5d095 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:32.342871  804231 certs.go:381] copying /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.14b885b8 -> /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt
	I0916 23:57:32.343007  804231 certs.go:385] copying /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.14b885b8 -> /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key
	I0916 23:57:32.343136  804231 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key
	I0916 23:57:32.343152  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 23:57:32.343165  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 23:57:32.343178  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 23:57:32.343191  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 23:57:32.343204  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 23:57:32.343214  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 23:57:32.343229  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 23:57:32.343247  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 23:57:32.343299  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem (1338 bytes)
	W0916 23:57:32.343327  804231 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707_empty.pem, impossibly tiny 0 bytes
	I0916 23:57:32.343337  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem (1675 bytes)
	I0916 23:57:32.343357  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem (1078 bytes)
	I0916 23:57:32.343379  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem (1123 bytes)
	I0916 23:57:32.343400  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem (1675 bytes)
	I0916 23:57:32.343464  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem (1708 bytes)
	I0916 23:57:32.343501  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:32.343521  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem -> /usr/share/ca-certificates/752707.pem
	I0916 23:57:32.343534  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> /usr/share/ca-certificates/7527072.pem
	I0916 23:57:32.343588  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:57:32.360782  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0916 23:57:32.447595  804231 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0916 23:57:32.451217  804231 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0916 23:57:32.464033  804231 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0916 23:57:32.467273  804231 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0916 23:57:32.478860  804231 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0916 23:57:32.482180  804231 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0916 23:57:32.493717  804231 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0916 23:57:32.496761  804231 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0916 23:57:32.507849  804231 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0916 23:57:32.511054  804231 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0916 23:57:32.523733  804231 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0916 23:57:32.526954  804231 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0916 23:57:32.538314  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 23:57:32.561866  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 23:57:32.585900  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 23:57:32.610048  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0916 23:57:32.634812  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0916 23:57:32.659163  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 23:57:32.682157  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 23:57:32.704663  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 23:57:32.727856  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 23:57:32.752740  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem --> /usr/share/ca-certificates/752707.pem (1338 bytes)
	I0916 23:57:32.775900  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem --> /usr/share/ca-certificates/7527072.pem (1708 bytes)
	I0916 23:57:32.798720  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0916 23:57:32.815542  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0916 23:57:32.832241  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0916 23:57:32.848964  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0916 23:57:32.865780  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0916 23:57:32.882614  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0916 23:57:32.899296  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0916 23:57:32.916516  804231 ssh_runner.go:195] Run: openssl version
	I0916 23:57:32.921611  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7527072.pem && ln -fs /usr/share/ca-certificates/7527072.pem /etc/ssl/certs/7527072.pem"
	I0916 23:57:32.930917  804231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7527072.pem
	I0916 23:57:32.934241  804231 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 23:54 /usr/share/ca-certificates/7527072.pem
	I0916 23:57:32.934283  804231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7527072.pem
	I0916 23:57:32.941354  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7527072.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 23:57:32.950335  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 23:57:32.959292  804231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:32.962576  804231 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:32.962623  804231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:32.968989  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 23:57:32.978331  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/752707.pem && ln -fs /usr/share/ca-certificates/752707.pem /etc/ssl/certs/752707.pem"
	I0916 23:57:32.987188  804231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/752707.pem
	I0916 23:57:32.990463  804231 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 23:54 /usr/share/ca-certificates/752707.pem
	I0916 23:57:32.990497  804231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/752707.pem
	I0916 23:57:32.996813  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/752707.pem /etc/ssl/certs/51391683.0"
	I0916 23:57:33.005924  804231 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 23:57:33.009122  804231 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 23:57:33.009183  804231 kubeadm.go:926] updating node {m03 192.168.49.4 8443 v1.34.0 containerd true true} ...
	I0916 23:57:33.009266  804231 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-472903-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 23:57:33.009291  804231 kube-vip.go:115] generating kube-vip config ...
	I0916 23:57:33.009319  804231 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0916 23:57:33.021189  804231 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0916 23:57:33.021246  804231 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0916 23:57:33.021293  804231 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0916 23:57:33.029533  804231 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 23:57:33.029576  804231 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0916 23:57:33.038861  804231 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0916 23:57:33.056092  804231 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 23:57:33.075506  804231 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0916 23:57:33.093918  804231 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0916 23:57:33.097171  804231 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 23:57:33.107668  804231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:33.167706  804231 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 23:57:33.188453  804231 host.go:66] Checking if "ha-472903" exists ...
	I0916 23:57:33.188671  804231 start.go:317] joinCluster: &{Name:ha-472903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 23:57:33.188781  804231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0916 23:57:33.188819  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:57:33.210165  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0916 23:57:33.351871  804231 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 23:57:33.351930  804231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token uj456s.97hymgg3kmg6owuv --discovery-token-ca-cert-hash sha256:52c78ec9ad9a2dc0941e43ce337b864c76ea573e452bc75ed737e69ad76deac1 --ignore-preflight-errors=all --cri-socket unix:///run/containerd/containerd.sock --node-name=ha-472903-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443"
	I0916 23:57:51.860237  804231 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token uj456s.97hymgg3kmg6owuv --discovery-token-ca-cert-hash sha256:52c78ec9ad9a2dc0941e43ce337b864c76ea573e452bc75ed737e69ad76deac1 --ignore-preflight-errors=all --cri-socket unix:///run/containerd/containerd.sock --node-name=ha-472903-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443": (18.508258539s)
	I0916 23:57:51.860308  804231 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0916 23:57:52.080986  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-472903-m03 minikube.k8s.io/updated_at=2025_09_16T23_57_52_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a minikube.k8s.io/name=ha-472903 minikube.k8s.io/primary=false
	I0916 23:57:52.152525  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-472903-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0916 23:57:52.226560  804231 start.go:319] duration metric: took 19.037884553s to joinCluster
	I0916 23:57:52.226624  804231 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 23:57:52.226912  804231 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0916 23:57:52.227744  804231 out.go:179] * Verifying Kubernetes components...
	I0916 23:57:52.228620  804231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:52.334638  804231 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 23:57:52.349036  804231 kapi.go:59] client config for ha-472903: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key", CAFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0916 23:57:52.349105  804231 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0916 23:57:52.349317  804231 node_ready.go:35] waiting up to 6m0s for node "ha-472903-m03" to be "Ready" ...
	I0916 23:57:54.352346  804231 node_ready.go:49] node "ha-472903-m03" is "Ready"
	I0916 23:57:54.352374  804231 node_ready.go:38] duration metric: took 2.003044453s for node "ha-472903-m03" to be "Ready" ...
	I0916 23:57:54.352389  804231 api_server.go:52] waiting for apiserver process to appear ...
	I0916 23:57:54.352476  804231 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 23:57:54.365259  804231 api_server.go:72] duration metric: took 2.138606454s to wait for apiserver process to appear ...
	I0916 23:57:54.365280  804231 api_server.go:88] waiting for apiserver healthz status ...
	I0916 23:57:54.365298  804231 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0916 23:57:54.370985  804231 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0916 23:57:54.371831  804231 api_server.go:141] control plane version: v1.34.0
	I0916 23:57:54.371850  804231 api_server.go:131] duration metric: took 6.564025ms to wait for apiserver health ...
	I0916 23:57:54.371858  804231 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 23:57:54.376785  804231 system_pods.go:59] 27 kube-system pods found
	I0916 23:57:54.376811  804231 system_pods.go:61] "coredns-66bc5c9577-c94hz" [774f1c0f-9759-44c2-957d-5a97670f951b] Running
	I0916 23:57:54.376815  804231 system_pods.go:61] "coredns-66bc5c9577-qn8m7" [1c58205e-e865-42fc-8282-23e3d779ee97] Running
	I0916 23:57:54.376818  804231 system_pods.go:61] "etcd-ha-472903" [e333577b-838c-41c5-ba86-ce3d7de57077] Running
	I0916 23:57:54.376822  804231 system_pods.go:61] "etcd-ha-472903-m02" [8a478117-c53d-4621-aa09-be3c16d386c0] Running
	I0916 23:57:54.376824  804231 system_pods.go:61] "etcd-ha-472903-m03" [73e10c6a-306a-4c7e-b816-1b8d6b815292] Pending
	I0916 23:57:54.376830  804231 system_pods.go:61] "kindnet-2dqnn" [f5c4164d-0d88-4b7b-bc52-18a7e211fe98] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-2dqnn": pod kindnet-2dqnn is already assigned to node "ha-472903-m03")
	I0916 23:57:54.376833  804231 system_pods.go:61] "kindnet-lh7dv" [1da43ca7-9af7-4573-9cdc-fd21b098ca2c] Running
	I0916 23:57:54.376838  804231 system_pods.go:61] "kindnet-q7c7s" [85db5b30-8ace-4bb0-8886-32b9ca032b2b] Running
	I0916 23:57:54.376842  804231 system_pods.go:61] "kindnet-wwdfr" [e86a6e30-712e-4d39-a235-87489d16c0f3] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-wwdfr": pod kindnet-wwdfr is already assigned to node "ha-472903-m03")
	I0916 23:57:54.376849  804231 system_pods.go:61] "kindnet-x6twd" [f2346479-1adb-4bc7-af07-971525be2b05] Pending: PodScheduled:SchedulerError (pod f2346479-1adb-4bc7-af07-971525be2b05(kube-system/kindnet-x6twd) is in the cache, so can't be assumed)
	I0916 23:57:54.376853  804231 system_pods.go:61] "kube-apiserver-ha-472903" [e2844751-3962-4753-8b63-79c124dd5fd7] Running
	I0916 23:57:54.376858  804231 system_pods.go:61] "kube-apiserver-ha-472903-m02" [6675419c-7693-4970-b73c-8415bcda1684] Running
	I0916 23:57:54.376861  804231 system_pods.go:61] "kube-apiserver-ha-472903-m03" [8c79e747-e193-4471-a4be-ab4d604998ad] Pending
	I0916 23:57:54.376867  804231 system_pods.go:61] "kube-controller-manager-ha-472903" [be5cfd0b-a3b9-44cf-8cde-74e9eb89c738] Running
	I0916 23:57:54.376870  804231 system_pods.go:61] "kube-controller-manager-ha-472903-m02" [54f6e7e0-0a78-4651-b24f-f902c6bf7efb] Running
	I0916 23:57:54.376873  804231 system_pods.go:61] "kube-controller-manager-ha-472903-m03" [d48b1c84-6653-43e8-9322-ae2c64471dde] Pending
	I0916 23:57:54.376876  804231 system_pods.go:61] "kube-proxy-58lkb" [32fed88c-ce9e-4536-8e96-04ab5b4f5d42] Running
	I0916 23:57:54.376881  804231 system_pods.go:61] "kube-proxy-d4m8f" [d4a70eec-48a7-4ea6-871a-1b5ed2beca9a] Running
	I0916 23:57:54.376885  804231 system_pods.go:61] "kube-proxy-kn6nb" [53644856-9fda-4556-bbb5-12254c4b00a3] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-kn6nb": pod kube-proxy-kn6nb is already assigned to node "ha-472903-m03")
	I0916 23:57:54.376889  804231 system_pods.go:61] "kube-proxy-xhlnz" [1967fed1-7529-46d0-accd-ab74751b47fa] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-xhlnz": pod kube-proxy-xhlnz is already assigned to node "ha-472903-m03")
	I0916 23:57:54.376894  804231 system_pods.go:61] "kube-scheduler-ha-472903" [e949de65-b218-45cb-abe7-79b704aae473] Running
	I0916 23:57:54.376897  804231 system_pods.go:61] "kube-scheduler-ha-472903-m02" [08b5a4f0-3aa6-4a82-b171-afc1eafcd4c2] Running
	I0916 23:57:54.376900  804231 system_pods.go:61] "kube-scheduler-ha-472903-m03" [7b954c46-3b8c-47e7-b10c-a20dd936d45c] Pending
	I0916 23:57:54.376904  804231 system_pods.go:61] "kube-vip-ha-472903" [ccdab212-cf0c-4bf0-958b-173e1008f7bc] Running
	I0916 23:57:54.376907  804231 system_pods.go:61] "kube-vip-ha-472903-m02" [748f096f-bec6-4de8-92f0-128db827bdd6] Running
	I0916 23:57:54.376910  804231 system_pods.go:61] "kube-vip-ha-472903-m03" [62b1f237-95c3-4a40-b3b7-a519f7c80ad4] Pending
	I0916 23:57:54.376913  804231 system_pods.go:61] "storage-provisioner" [ac7f283e-4d28-46cf-a519-bd227237d5e7] Running
	I0916 23:57:54.376918  804231 system_pods.go:74] duration metric: took 5.052009ms to wait for pod list to return data ...
	I0916 23:57:54.376925  804231 default_sa.go:34] waiting for default service account to be created ...
	I0916 23:57:54.378969  804231 default_sa.go:45] found service account: "default"
	I0916 23:57:54.378989  804231 default_sa.go:55] duration metric: took 2.056584ms for default service account to be created ...
	I0916 23:57:54.378999  804231 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 23:57:54.383753  804231 system_pods.go:86] 27 kube-system pods found
	I0916 23:57:54.383781  804231 system_pods.go:89] "coredns-66bc5c9577-c94hz" [774f1c0f-9759-44c2-957d-5a97670f951b] Running
	I0916 23:57:54.383790  804231 system_pods.go:89] "coredns-66bc5c9577-qn8m7" [1c58205e-e865-42fc-8282-23e3d779ee97] Running
	I0916 23:57:54.383796  804231 system_pods.go:89] "etcd-ha-472903" [e333577b-838c-41c5-ba86-ce3d7de57077] Running
	I0916 23:57:54.383802  804231 system_pods.go:89] "etcd-ha-472903-m02" [8a478117-c53d-4621-aa09-be3c16d386c0] Running
	I0916 23:57:54.383812  804231 system_pods.go:89] "etcd-ha-472903-m03" [73e10c6a-306a-4c7e-b816-1b8d6b815292] Pending
	I0916 23:57:54.383821  804231 system_pods.go:89] "kindnet-2dqnn" [f5c4164d-0d88-4b7b-bc52-18a7e211fe98] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-2dqnn": pod kindnet-2dqnn is already assigned to node "ha-472903-m03")
	I0916 23:57:54.383829  804231 system_pods.go:89] "kindnet-lh7dv" [1da43ca7-9af7-4573-9cdc-fd21b098ca2c] Running
	I0916 23:57:54.383837  804231 system_pods.go:89] "kindnet-q7c7s" [85db5b30-8ace-4bb0-8886-32b9ca032b2b] Running
	I0916 23:57:54.383842  804231 system_pods.go:89] "kindnet-wwdfr" [e86a6e30-712e-4d39-a235-87489d16c0f3] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-wwdfr": pod kindnet-wwdfr is already assigned to node "ha-472903-m03")
	I0916 23:57:54.383852  804231 system_pods.go:89] "kindnet-x6twd" [f2346479-1adb-4bc7-af07-971525be2b05] Pending: PodScheduled:SchedulerError (pod f2346479-1adb-4bc7-af07-971525be2b05(kube-system/kindnet-x6twd) is in the cache, so can't be assumed)
	I0916 23:57:54.383863  804231 system_pods.go:89] "kube-apiserver-ha-472903" [e2844751-3962-4753-8b63-79c124dd5fd7] Running
	I0916 23:57:54.383874  804231 system_pods.go:89] "kube-apiserver-ha-472903-m02" [6675419c-7693-4970-b73c-8415bcda1684] Running
	I0916 23:57:54.383881  804231 system_pods.go:89] "kube-apiserver-ha-472903-m03" [8c79e747-e193-4471-a4be-ab4d604998ad] Pending
	I0916 23:57:54.383887  804231 system_pods.go:89] "kube-controller-manager-ha-472903" [be5cfd0b-a3b9-44cf-8cde-74e9eb89c738] Running
	I0916 23:57:54.383895  804231 system_pods.go:89] "kube-controller-manager-ha-472903-m02" [54f6e7e0-0a78-4651-b24f-f902c6bf7efb] Running
	I0916 23:57:54.383900  804231 system_pods.go:89] "kube-controller-manager-ha-472903-m03" [d48b1c84-6653-43e8-9322-ae2c64471dde] Pending
	I0916 23:57:54.383908  804231 system_pods.go:89] "kube-proxy-58lkb" [32fed88c-ce9e-4536-8e96-04ab5b4f5d42] Running
	I0916 23:57:54.383913  804231 system_pods.go:89] "kube-proxy-d4m8f" [d4a70eec-48a7-4ea6-871a-1b5ed2beca9a] Running
	I0916 23:57:54.383921  804231 system_pods.go:89] "kube-proxy-kn6nb" [53644856-9fda-4556-bbb5-12254c4b00a3] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-kn6nb": pod kube-proxy-kn6nb is already assigned to node "ha-472903-m03")
	I0916 23:57:54.383930  804231 system_pods.go:89] "kube-proxy-xhlnz" [1967fed1-7529-46d0-accd-ab74751b47fa] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-xhlnz": pod kube-proxy-xhlnz is already assigned to node "ha-472903-m03")
	I0916 23:57:54.383939  804231 system_pods.go:89] "kube-scheduler-ha-472903" [e949de65-b218-45cb-abe7-79b704aae473] Running
	I0916 23:57:54.383946  804231 system_pods.go:89] "kube-scheduler-ha-472903-m02" [08b5a4f0-3aa6-4a82-b171-afc1eafcd4c2] Running
	I0916 23:57:54.383955  804231 system_pods.go:89] "kube-scheduler-ha-472903-m03" [7b954c46-3b8c-47e7-b10c-a20dd936d45c] Pending
	I0916 23:57:54.383962  804231 system_pods.go:89] "kube-vip-ha-472903" [ccdab212-cf0c-4bf0-958b-173e1008f7bc] Running
	I0916 23:57:54.383967  804231 system_pods.go:89] "kube-vip-ha-472903-m02" [748f096f-bec6-4de8-92f0-128db827bdd6] Running
	I0916 23:57:54.383975  804231 system_pods.go:89] "kube-vip-ha-472903-m03" [62b1f237-95c3-4a40-b3b7-a519f7c80ad4] Pending
	I0916 23:57:54.383980  804231 system_pods.go:89] "storage-provisioner" [ac7f283e-4d28-46cf-a519-bd227237d5e7] Running
	I0916 23:57:54.383991  804231 system_pods.go:126] duration metric: took 4.985254ms to wait for k8s-apps to be running ...
	I0916 23:57:54.384002  804231 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 23:57:54.384056  804231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 23:57:54.395540  804231 system_svc.go:56] duration metric: took 11.532177ms WaitForService to wait for kubelet
	I0916 23:57:54.395557  804231 kubeadm.go:578] duration metric: took 2.168909422s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 23:57:54.395577  804231 node_conditions.go:102] verifying NodePressure condition ...
	I0916 23:57:54.398165  804231 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 23:57:54.398183  804231 node_conditions.go:123] node cpu capacity is 8
	I0916 23:57:54.398194  804231 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 23:57:54.398197  804231 node_conditions.go:123] node cpu capacity is 8
	I0916 23:57:54.398201  804231 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 23:57:54.398205  804231 node_conditions.go:123] node cpu capacity is 8
	I0916 23:57:54.398209  804231 node_conditions.go:105] duration metric: took 2.627179ms to run NodePressure ...
	I0916 23:57:54.398219  804231 start.go:241] waiting for startup goroutines ...
	I0916 23:57:54.398248  804231 start.go:255] writing updated cluster config ...
	I0916 23:57:54.398554  804231 ssh_runner.go:195] Run: rm -f paused
	I0916 23:57:54.402187  804231 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0916 23:57:54.402686  804231 kapi.go:59] client config for ha-472903: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key", CAFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 23:57:54.405144  804231 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-c94hz" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:54.409401  804231 pod_ready.go:94] pod "coredns-66bc5c9577-c94hz" is "Ready"
	I0916 23:57:54.409438  804231 pod_ready.go:86] duration metric: took 4.271645ms for pod "coredns-66bc5c9577-c94hz" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:54.409448  804231 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-qn8m7" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:54.413536  804231 pod_ready.go:94] pod "coredns-66bc5c9577-qn8m7" is "Ready"
	I0916 23:57:54.413553  804231 pod_ready.go:86] duration metric: took 4.095453ms for pod "coredns-66bc5c9577-qn8m7" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:54.415699  804231 pod_ready.go:83] waiting for pod "etcd-ha-472903" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:54.419599  804231 pod_ready.go:94] pod "etcd-ha-472903" is "Ready"
	I0916 23:57:54.419618  804231 pod_ready.go:86] duration metric: took 3.899664ms for pod "etcd-ha-472903" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:54.419627  804231 pod_ready.go:83] waiting for pod "etcd-ha-472903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:54.423363  804231 pod_ready.go:94] pod "etcd-ha-472903-m02" is "Ready"
	I0916 23:57:54.423380  804231 pod_ready.go:86] duration metric: took 3.746731ms for pod "etcd-ha-472903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:54.423386  804231 pod_ready.go:83] waiting for pod "etcd-ha-472903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:54.603706  804231 request.go:683] "Waited before sending request" delay="180.227617ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-472903-m03"
	I0916 23:57:54.803902  804231 request.go:683] "Waited before sending request" delay="197.349252ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903-m03"
	I0916 23:57:55.003954  804231 request.go:683] "Waited before sending request" delay="80.206914ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-472903-m03"
	I0916 23:57:55.203362  804231 request.go:683] "Waited before sending request" delay="196.197515ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903-m03"
	I0916 23:57:55.206052  804231 pod_ready.go:94] pod "etcd-ha-472903-m03" is "Ready"
	I0916 23:57:55.206075  804231 pod_ready.go:86] duration metric: took 782.683771ms for pod "etcd-ha-472903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:55.403450  804231 request.go:683] "Waited before sending request" delay="197.254129ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I0916 23:57:55.406629  804231 pod_ready.go:83] waiting for pod "kube-apiserver-ha-472903" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:55.604081  804231 request.go:683] "Waited before sending request" delay="197.327981ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-472903"
	I0916 23:57:55.803277  804231 request.go:683] "Waited before sending request" delay="196.28238ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903"
	I0916 23:57:55.806023  804231 pod_ready.go:94] pod "kube-apiserver-ha-472903" is "Ready"
	I0916 23:57:55.806053  804231 pod_ready.go:86] duration metric: took 399.400731ms for pod "kube-apiserver-ha-472903" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:55.806064  804231 pod_ready.go:83] waiting for pod "kube-apiserver-ha-472903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:56.003360  804231 request.go:683] "Waited before sending request" delay="197.181089ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-472903-m02"
	I0916 23:57:56.203591  804231 request.go:683] "Waited before sending request" delay="197.334062ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903-m02"
	I0916 23:57:56.206593  804231 pod_ready.go:94] pod "kube-apiserver-ha-472903-m02" is "Ready"
	I0916 23:57:56.206619  804231 pod_ready.go:86] duration metric: took 400.548564ms for pod "kube-apiserver-ha-472903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:56.206627  804231 pod_ready.go:83] waiting for pod "kube-apiserver-ha-472903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:56.404053  804231 request.go:683] "Waited before sending request" delay="197.330591ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-472903-m03"
	I0916 23:57:56.603366  804231 request.go:683] "Waited before sending request" delay="196.334008ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903-m03"
	I0916 23:57:56.606216  804231 pod_ready.go:94] pod "kube-apiserver-ha-472903-m03" is "Ready"
	I0916 23:57:56.606240  804231 pod_ready.go:86] duration metric: took 399.60823ms for pod "kube-apiserver-ha-472903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:56.803696  804231 request.go:683] "Waited before sending request" delay="197.341894ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I0916 23:57:56.806878  804231 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-472903" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:57.003237  804231 request.go:683] "Waited before sending request" delay="196.261492ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-472903"
	I0916 23:57:57.203189  804231 request.go:683] "Waited before sending request" delay="197.16206ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903"
	I0916 23:57:57.205847  804231 pod_ready.go:94] pod "kube-controller-manager-ha-472903" is "Ready"
	I0916 23:57:57.205870  804231 pod_ready.go:86] duration metric: took 398.97003ms for pod "kube-controller-manager-ha-472903" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:57.205878  804231 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-472903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:57.403223  804231 request.go:683] "Waited before sending request" delay="197.233762ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-472903-m02"
	I0916 23:57:57.603503  804231 request.go:683] "Waited before sending request" delay="197.308924ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903-m02"
	I0916 23:57:57.606309  804231 pod_ready.go:94] pod "kube-controller-manager-ha-472903-m02" is "Ready"
	I0916 23:57:57.606331  804231 pod_ready.go:86] duration metric: took 400.447455ms for pod "kube-controller-manager-ha-472903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:57.606339  804231 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-472903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:57.803572  804231 request.go:683] "Waited before sending request" delay="197.156861ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-472903-m03"
	I0916 23:57:58.003564  804231 request.go:683] "Waited before sending request" delay="197.308739ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903-m03"
	I0916 23:57:58.006495  804231 pod_ready.go:94] pod "kube-controller-manager-ha-472903-m03" is "Ready"
	I0916 23:57:58.006527  804231 pod_ready.go:86] duration metric: took 400.177209ms for pod "kube-controller-manager-ha-472903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:58.203971  804231 request.go:683] "Waited before sending request" delay="197.330656ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I0916 23:57:58.207087  804231 pod_ready.go:83] waiting for pod "kube-proxy-58lkb" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:58.403484  804231 request.go:683] "Waited before sending request" delay="196.298118ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-58lkb"
	I0916 23:57:58.603727  804231 request.go:683] "Waited before sending request" delay="197.238459ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903-m02"
	I0916 23:57:58.606561  804231 pod_ready.go:94] pod "kube-proxy-58lkb" is "Ready"
	I0916 23:57:58.606586  804231 pod_ready.go:86] duration metric: took 399.476011ms for pod "kube-proxy-58lkb" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:58.606593  804231 pod_ready.go:83] waiting for pod "kube-proxy-d4m8f" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:58.804003  804231 request.go:683] "Waited before sending request" delay="197.323847ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d4m8f"
	I0916 23:57:59.003937  804231 request.go:683] "Waited before sending request" delay="197.340178ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903"
	I0916 23:57:59.006899  804231 pod_ready.go:94] pod "kube-proxy-d4m8f" is "Ready"
	I0916 23:57:59.006927  804231 pod_ready.go:86] duration metric: took 400.327971ms for pod "kube-proxy-d4m8f" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:59.006938  804231 pod_ready.go:83] waiting for pod "kube-proxy-kn6nb" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:59.203366  804231 request.go:683] "Waited before sending request" delay="196.341882ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kn6nb"
	I0916 23:57:59.403608  804231 request.go:683] "Waited before sending request" delay="197.193431ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903-m03"
	I0916 23:57:59.604047  804231 request.go:683] "Waited before sending request" delay="96.244025ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kn6nb"
	I0916 23:57:59.803112  804231 request.go:683] "Waited before sending request" delay="196.282766ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903-m03"
	I0916 23:58:00.203120  804231 request.go:683] "Waited before sending request" delay="192.276334ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903-m03"
	I0916 23:58:00.603459  804231 request.go:683] "Waited before sending request" delay="93.218157ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903-m03"
	W0916 23:58:01.014543  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:03.512871  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:06.012965  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:08.512763  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:11.012966  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:13.013166  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:15.512655  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:18.012615  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:20.513188  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:23.012908  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:25.013240  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:27.512733  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:30.012142  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:32.012503  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:34.013070  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:36.512643  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	I0916 23:58:37.014670  804231 pod_ready.go:94] pod "kube-proxy-kn6nb" is "Ready"
	I0916 23:58:37.014697  804231 pod_ready.go:86] duration metric: took 38.007753603s for pod "kube-proxy-kn6nb" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:37.017732  804231 pod_ready.go:83] waiting for pod "kube-scheduler-ha-472903" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:37.022228  804231 pod_ready.go:94] pod "kube-scheduler-ha-472903" is "Ready"
	I0916 23:58:37.022246  804231 pod_ready.go:86] duration metric: took 4.488553ms for pod "kube-scheduler-ha-472903" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:37.022253  804231 pod_ready.go:83] waiting for pod "kube-scheduler-ha-472903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:37.026173  804231 pod_ready.go:94] pod "kube-scheduler-ha-472903-m02" is "Ready"
	I0916 23:58:37.026191  804231 pod_ready.go:86] duration metric: took 3.932068ms for pod "kube-scheduler-ha-472903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:37.026198  804231 pod_ready.go:83] waiting for pod "kube-scheduler-ha-472903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:37.030029  804231 pod_ready.go:94] pod "kube-scheduler-ha-472903-m03" is "Ready"
	I0916 23:58:37.030046  804231 pod_ready.go:86] duration metric: took 3.843487ms for pod "kube-scheduler-ha-472903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:37.030054  804231 pod_ready.go:40] duration metric: took 42.627839542s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0916 23:58:37.073472  804231 start.go:617] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0916 23:58:37.074923  804231 out.go:179] * Done! kubectl is now configured to use "ha-472903" cluster and "default" namespace by default
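
The pod_ready.go lines above show the pattern minikube follows while waiting for the cluster to settle: each kube-system pod is fetched repeatedly until its Ready condition turns true, with request spacing dictated by client-side throttling. Below is a minimal client-go sketch of that kind of readiness poll — not minikube's actual code; the kubeconfig path, namespace, pod name, and the ~2.5s interval are illustrative assumptions taken from the log.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // ~/.kube/config
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll roughly every 2.5s, the cadence visible in the pod_ready.go warnings above.
	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(), "kube-proxy-kn6nb", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2500 * time.Millisecond)
	}
}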
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0a41d8b587e02       8c811b4aec35f       12 minutes ago      Running             busybox                   0                   a2422ee3e6e6d       busybox-7b57f96db7-6hrm6
	f33de265effb1       6e38f40d628db       13 minutes ago      Running             storage-provisioner       1                   1c0713f862ea0       storage-provisioner
	9f103b05d2d6f       52546a367cc9e       13 minutes ago      Running             coredns                   0                   9579263342827       coredns-66bc5c9577-c94hz
	3b457407f10e3       52546a367cc9e       13 minutes ago      Running             coredns                   0                   290cfb537788e       coredns-66bc5c9577-qn8m7
	cc69d2451cb65       409467f978b4a       13 minutes ago      Running             kindnet-cni               0                   3e17d6ae9b2a6       kindnet-lh7dv
	f4767b6363ce9       6e38f40d628db       13 minutes ago      Exited              storage-provisioner       0                   1c0713f862ea0       storage-provisioner
	92dd4d116eb03       df0860106674d       13 minutes ago      Running             kube-proxy                0                   8c0ecd5301326       kube-proxy-d4m8f
	3cb75495f7a54       765655ea60781       13 minutes ago      Running             kube-vip                  0                   4c425da29992d       kube-vip-ha-472903
	bba28cace6502       46169d968e920       14 minutes ago      Running             kube-scheduler            0                   f18dd7697c60f       kube-scheduler-ha-472903
	087290a41f59c       a0af72f2ec6d6       14 minutes ago      Running             kube-controller-manager   0                   0760ebe1d2a56       kube-controller-manager-ha-472903
	0aba62132d764       90550c43ad2bc       14 minutes ago      Running             kube-apiserver            0                   8ad1fa8bc0267       kube-apiserver-ha-472903
	23c0af0bdbe95       5f1f5298c888d       14 minutes ago      Running             etcd                      0                   b01a62742caec       etcd-ha-472903
	
	
	==> containerd <==
	Sep 16 23:57:20 ha-472903 containerd[765]: time="2025-09-16T23:57:20.857383931Z" level=info msg="StartContainer for \"9f103b05d2d6fd9df1ffca0135173363251e58587aa3f9093200d96a7302d315\""
	Sep 16 23:57:20 ha-472903 containerd[765]: time="2025-09-16T23:57:20.915209442Z" level=info msg="StartContainer for \"9f103b05d2d6fd9df1ffca0135173363251e58587aa3f9093200d96a7302d315\" returns successfully"
	Sep 16 23:57:26 ha-472903 containerd[765]: time="2025-09-16T23:57:26.847849669Z" level=info msg="received exit event container_id:\"f4767b6363ce9c18f8b183cfefd42f69a8b6845fea9e30eec23d90668bc0a3f8\"  id:\"f4767b6363ce9c18f8b183cfefd42f69a8b6845fea9e30eec23d90668bc0a3f8\"  pid:2188  exit_status:1  exited_at:{seconds:1758067046  nanos:847300745}"
	Sep 16 23:57:29 ha-472903 containerd[765]: time="2025-09-16T23:57:29.084468964Z" level=info msg="shim disconnected" id=f4767b6363ce9c18f8b183cfefd42f69a8b6845fea9e30eec23d90668bc0a3f8 namespace=k8s.io
	Sep 16 23:57:29 ha-472903 containerd[765]: time="2025-09-16T23:57:29.084514637Z" level=warning msg="cleaning up after shim disconnected" id=f4767b6363ce9c18f8b183cfefd42f69a8b6845fea9e30eec23d90668bc0a3f8 namespace=k8s.io
	Sep 16 23:57:29 ha-472903 containerd[765]: time="2025-09-16T23:57:29.084528446Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 16 23:57:29 ha-472903 containerd[765]: time="2025-09-16T23:57:29.861023305Z" level=info msg="CreateContainer within sandbox \"1c0713f862ea047ef39e7ae39aea7b7769255565bbf61da2859ac341b5b32bca\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:1,}"
	Sep 16 23:57:29 ha-472903 containerd[765]: time="2025-09-16T23:57:29.875038922Z" level=info msg="CreateContainer within sandbox \"1c0713f862ea047ef39e7ae39aea7b7769255565bbf61da2859ac341b5b32bca\" for &ContainerMetadata{Name:storage-provisioner,Attempt:1,} returns container id \"f33de265effb1050318db82caef7df35706c6a78a2f601466a28e71f4048fedc\""
	Sep 16 23:57:29 ha-472903 containerd[765]: time="2025-09-16T23:57:29.875884762Z" level=info msg="StartContainer for \"f33de265effb1050318db82caef7df35706c6a78a2f601466a28e71f4048fedc\""
	Sep 16 23:57:29 ha-472903 containerd[765]: time="2025-09-16T23:57:29.929708067Z" level=info msg="StartContainer for \"f33de265effb1050318db82caef7df35706c6a78a2f601466a28e71f4048fedc\" returns successfully"
	Sep 16 23:58:40 ha-472903 containerd[765]: time="2025-09-16T23:58:40.362974621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox-7b57f96db7-6hrm6,Uid:bd03bad4-af1e-42d0-81fb-6fcaeaa8775e,Namespace:default,Attempt:0,}"
	Sep 16 23:58:40 ha-472903 containerd[765]: time="2025-09-16T23:58:40.455106923Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to create inotify fd"
	Sep 16 23:58:40 ha-472903 containerd[765]: time="2025-09-16T23:58:40.455480779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox-7b57f96db7-6hrm6,Uid:bd03bad4-af1e-42d0-81fb-6fcaeaa8775e,Namespace:default,Attempt:0,} returns sandbox id \"a2422ee3e6e6de2b23238fbdd05d962f2c25009569227e21869b285e5353e70a\""
	Sep 16 23:58:40 ha-472903 containerd[765]: time="2025-09-16T23:58:40.457290181Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28\""
	Sep 16 23:58:42 ha-472903 containerd[765]: time="2025-09-16T23:58:42.440332779Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Sep 16 23:58:42 ha-472903 containerd[765]: time="2025-09-16T23:58:42.440968214Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28: active requests=0, bytes read=727667"
	Sep 16 23:58:42 ha-472903 containerd[765]: time="2025-09-16T23:58:42.442025332Z" level=info msg="ImageCreate event name:\"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Sep 16 23:58:42 ha-472903 containerd[765]: time="2025-09-16T23:58:42.443719507Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Sep 16 23:58:42 ha-472903 containerd[765]: time="2025-09-16T23:58:42.444221405Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28\" with image id \"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\", repo tag \"gcr.io/k8s-minikube/busybox:1.28\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12\", size \"725911\" in 1.986887608s"
	Sep 16 23:58:42 ha-472903 containerd[765]: time="2025-09-16T23:58:42.444254598Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28\" returns image reference \"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\""
	Sep 16 23:58:42 ha-472903 containerd[765]: time="2025-09-16T23:58:42.447875079Z" level=info msg="CreateContainer within sandbox \"a2422ee3e6e6de2b23238fbdd05d962f2c25009569227e21869b285e5353e70a\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Sep 16 23:58:42 ha-472903 containerd[765]: time="2025-09-16T23:58:42.457018566Z" level=info msg="CreateContainer within sandbox \"a2422ee3e6e6de2b23238fbdd05d962f2c25009569227e21869b285e5353e70a\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"0a41d8b587e021b881cec481dd5e13b98d695cdba1671a8eb501413b121637d8\""
	Sep 16 23:58:42 ha-472903 containerd[765]: time="2025-09-16T23:58:42.457508138Z" level=info msg="StartContainer for \"0a41d8b587e021b881cec481dd5e13b98d695cdba1671a8eb501413b121637d8\""
	Sep 16 23:58:42 ha-472903 containerd[765]: time="2025-09-16T23:58:42.510633374Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to create inotify fd"
	Sep 16 23:58:42 ha-472903 containerd[765]: time="2025-09-16T23:58:42.512731136Z" level=info msg="StartContainer for \"0a41d8b587e021b881cec481dd5e13b98d695cdba1671a8eb501413b121637d8\" returns successfully"
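
The containerd entries above trace the CRI flow for the busybox test pod: RunPodSandbox, PullImage, CreateContainer, StartContainer. As a point of reference only, here is a small sketch of pulling the same image through the containerd Go client in the "k8s.io" namespace the CRI plugin uses; the socket path and namespace are assumptions about the node's default layout.

package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Connect to the node-local containerd socket (assumed default path).
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// CRI-managed images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Pull and unpack the busybox image used by the DNS test pod.
	img, err := client.Pull(ctx, "gcr.io/k8s-minikube/busybox:1.28", containerd.WithPullUnpack)
	if err != nil {
		panic(err)
	}
	fmt.Println("pulled", img.Name())
}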
	
	
	==> coredns [3b457407f10e357ce33da7fa3fb4333f8312f0d3e3570cf8528cdcac8f5a1d0f] <==
	[INFO] 10.244.1.2:57899 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.012540337s
	[INFO] 10.244.1.2:54323 - 5 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,rd,ra 124 0.008980197s
	[INFO] 10.244.1.2:53799 - 6 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,rd,ra 126 0.009949044s
	[INFO] 10.244.0.4:39485 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157098s
	[INFO] 10.244.0.4:57871 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 89 0.000750185s
	[INFO] 10.244.0.4:53410 - 5 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,aa,rd,ra 126 0.000089028s
	[INFO] 10.244.1.2:37733 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150317s
	[INFO] 10.244.1.2:59346 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.028128363s
	[INFO] 10.244.1.2:43091 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.01004668s
	[INFO] 10.244.1.2:37227 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000191819s
	[INFO] 10.244.1.2:40079 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000125376s
	[INFO] 10.244.0.4:38168 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000181114s
	[INFO] 10.244.0.4:60067 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000087147s
	[INFO] 10.244.0.4:47611 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000122939s
	[INFO] 10.244.0.4:37626 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000121195s
	[INFO] 10.244.1.2:42817 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159509s
	[INFO] 10.244.1.2:33910 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000186538s
	[INFO] 10.244.1.2:37929 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000109836s
	[INFO] 10.244.0.4:50698 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000212263s
	[INFO] 10.244.0.4:33166 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000100167s
	[INFO] 10.244.1.2:50377 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157558s
	[INFO] 10.244.1.2:39491 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000132025s
	[INFO] 10.244.1.2:50075 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000112028s
	[INFO] 10.244.0.4:58743 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000149175s
	[INFO] 10.244.0.4:52796 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000114946s
	
	
	==> coredns [9f103b05d2d6fd9df1ffca0135173363251e58587aa3f9093200d96a7302d315] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45239 - 14115 "HINFO IN 5883645869461503498.3950535614037284853. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.058516241s
	[INFO] 10.244.1.2:55352 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 89 0.003252862s
	[INFO] 10.244.0.4:33650 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.001640931s
	[INFO] 10.244.0.4:50077 - 6 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,rd,ra 124 0.000621363s
	[INFO] 10.244.1.2:48439 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000189187s
	[INFO] 10.244.1.2:39582 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000151327s
	[INFO] 10.244.1.2:59539 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000140715s
	[INFO] 10.244.0.4:42999 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000177514s
	[INFO] 10.244.0.4:36769 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.010694753s
	[INFO] 10.244.0.4:53074 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000158932s
	[INFO] 10.244.0.4:57223 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00012213s
	[INFO] 10.244.1.2:50810 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000176678s
	[INFO] 10.244.0.4:58045 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142445s
	[INFO] 10.244.0.4:39777 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000123555s
	[INFO] 10.244.1.2:59022 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000148853s
	[INFO] 10.244.0.4:45136 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001657s
	[INFO] 10.244.0.4:37711 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000134332s
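
The CoreDNS query logs above record lookups such as kubernetes.default.svc.cluster.local being answered from the cluster DNS service (the reverse lookups of 10.96.0.10 point at the kube-dns VIP). A short Go sketch that issues the same lookup directly against that resolver, roughly what a pod's stub resolver does; the VIP and query name come from the log, and forcing the pure-Go resolver is an illustrative choice.

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Resolve directly against the cluster DNS service VIP implied by the log.
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
			d := net.Dialer{Timeout: 2 * time.Second}
			// Ignore the host's default resolver and talk to CoreDNS.
			return d.DialContext(ctx, "udp", "10.96.0.10:53")
		},
	}

	addrs, err := r.LookupHost(context.Background(), "kubernetes.default.svc.cluster.local")
	if err != nil {
		panic(err)
	}
	fmt.Println(addrs) // expected to contain the kubernetes service ClusterIP, e.g. 10.96.0.1
}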
	
	
	==> describe nodes <==
	Name:               ha-472903
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-472903
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=ha-472903
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_16T23_56_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Sep 2025 23:56:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-472903
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:10:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:08:42 +0000   Tue, 16 Sep 2025 23:56:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:08:42 +0000   Tue, 16 Sep 2025 23:56:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:08:42 +0000   Tue, 16 Sep 2025 23:56:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:08:42 +0000   Tue, 16 Sep 2025 23:56:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-472903
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 ac22e2ab5b0349cdb9474983aa23278e
	  System UUID:                695af4c7-28fb-4299-9454-75db3262ca2c
	  Boot ID:                    4acfd7d3-9698-436f-b4ae-efdf6bd483d5
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-6hrm6             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-66bc5c9577-c94hz             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     13m
	  kube-system                 coredns-66bc5c9577-qn8m7             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     13m
	  kube-system                 etcd-ha-472903                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         13m
	  kube-system                 kindnet-lh7dv                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-472903             250m (3%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-472903    200m (2%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-d4m8f                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-472903             100m (1%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-472903                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node ha-472903 status is now: NodeHasSufficientPID
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node ha-472903 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node ha-472903 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m                kubelet          Node ha-472903 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                kubelet          Node ha-472903 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m                kubelet          Node ha-472903 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m                node-controller  Node ha-472903 event: Registered Node ha-472903 in Controller
	  Normal  RegisteredNode           13m                node-controller  Node ha-472903 event: Registered Node ha-472903 in Controller
	  Normal  RegisteredNode           12m                node-controller  Node ha-472903 event: Registered Node ha-472903 in Controller
	
	
	Name:               ha-472903-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-472903-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=ha-472903
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_16T23_57_23_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Sep 2025 23:57:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-472903-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:10:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:07:45 +0000   Tue, 16 Sep 2025 23:57:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:07:45 +0000   Tue, 16 Sep 2025 23:57:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:07:45 +0000   Tue, 16 Sep 2025 23:57:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:07:45 +0000   Tue, 16 Sep 2025 23:57:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-472903-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 4094672df3d84509ae4c88c54f7f5e93
	  System UUID:                85df9db8-f21a-4038-9f8c-4cc1d81dc0d5
	  Boot ID:                    4acfd7d3-9698-436f-b4ae-efdf6bd483d5
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-4jfjt                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-ha-472903-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         13m
	  kube-system                 kindnet-q7c7s                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-472903-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-472903-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-58lkb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-472903-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-472903-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  Starting        13m   kube-proxy       
	  Normal  RegisteredNode  13m   node-controller  Node ha-472903-m02 event: Registered Node ha-472903-m02 in Controller
	  Normal  RegisteredNode  13m   node-controller  Node ha-472903-m02 event: Registered Node ha-472903-m02 in Controller
	  Normal  RegisteredNode  12m   node-controller  Node ha-472903-m02 event: Registered Node ha-472903-m02 in Controller
	
	
	Name:               ha-472903-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-472903-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=ha-472903
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_16T23_57_52_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Sep 2025 23:57:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-472903-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:10:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:09:45 +0000   Tue, 16 Sep 2025 23:57:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:09:45 +0000   Tue, 16 Sep 2025 23:57:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:09:45 +0000   Tue, 16 Sep 2025 23:57:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:09:45 +0000   Tue, 16 Sep 2025 23:57:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-472903-m03
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 9964c713c65f4333be8a877aab744040
	  System UUID:                7eb7f2ee-a32d-4876-a4ad-58f745b9c377
	  Boot ID:                    4acfd7d3-9698-436f-b4ae-efdf6bd483d5
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-mknzs                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-ha-472903-m03                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-x6twd                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-ha-472903-m03             250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-472903-m03    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-kn6nb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-472903-m03             100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-472903-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  12m   node-controller  Node ha-472903-m03 event: Registered Node ha-472903-m03 in Controller
	  Normal  RegisteredNode  12m   node-controller  Node ha-472903-m03 event: Registered Node ha-472903-m03 in Controller
	  Normal  RegisteredNode  12m   node-controller  Node ha-472903-m03 event: Registered Node ha-472903-m03 in Controller
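
All three node blocks above report a Ready=True condition with recent heartbeats. For completeness, a compact client-go sketch that lists the same per-node Ready condition — roughly what "kubectl describe nodes" summarizes, not minikube internals; the kubeconfig path is an assumption.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // ~/.kube/config
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				fmt.Printf("%s\tReady=%s\theartbeat=%s\n", n.Name, c.Status, c.LastHeartbeatTime)
			}
		}
	}
}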
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 86 e8 75 4b 01 57 08 06
	[  +0.025562] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff be 3d a9 85 b1 bd 08 06
	[ +13.150028] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff d6 5c f0 26 cd ba 08 06
	[  +0.000341] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a 20 90 fb f5 d8 08 06
	[ +28.639349] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 06 26 63 8d db 90 08 06
	[  +0.000417] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff be 3d a9 85 b1 bd 08 06
	[  +0.836892] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 56 cc 9b 52 38 94 08 06
	[  +0.080327] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3e 79 8e c8 7e 37 08 06
	[Sep16 23:40] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 66 3b 76 df aa 6a 08 06
	[ +20.325550] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff de 39 4b 41 df 63 08 06
	[  +0.000318] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 66 3b 76 df aa 6a 08 06
	[  +8.925776] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3e cd c1 f7 dc c8 08 06
	[  +0.000373] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 3e 79 8e c8 7e 37 08 06
	
	
	==> etcd [23c0af0bdbe9526d53769461ed9f80d8c743b02e625b65cce39c888f5e7d4b4e] <==
	{"level":"info","ts":"2025-09-16T23:57:38.284368Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"ab9d0391dce79465"}
	{"level":"info","ts":"2025-09-16T23:57:38.288146Z","caller":"etcdserver/server.go:1838","msg":"sending merged snapshot","from":"aec36adc501070cc","to":"ab9d0391dce79465","bytes":1356737,"size":"1.4 MB"}
	{"level":"info","ts":"2025-09-16T23:57:38.288252Z","caller":"rafthttp/snapshot_sender.go:82","msg":"sending database snapshot","snapshot-index":679,"remote-peer-id":"ab9d0391dce79465","bytes":1356737,"size":"1.4 MB"}
	{"level":"info","ts":"2025-09-16T23:57:38.293060Z","caller":"etcdserver/snapshot_merge.go:64","msg":"sent database snapshot to writer","bytes":1347584,"size":"1.3 MB"}
	{"level":"info","ts":"2025-09-16T23:57:38.299128Z","caller":"rafthttp/snapshot_sender.go:131","msg":"sent database snapshot","snapshot-index":679,"remote-peer-id":"ab9d0391dce79465","bytes":1356737,"size":"1.4 MB"}
	{"level":"info","ts":"2025-09-16T23:57:38.314973Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"ab9d0391dce79465"}
	{"level":"info","ts":"2025-09-16T23:57:38.321619Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"ab9d0391dce79465","stream-type":"stream Message"}
	{"level":"warn","ts":"2025-09-16T23:57:38.321647Z","caller":"rafthttp/stream.go:264","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"ab9d0391dce79465"}
	{"level":"info","ts":"2025-09-16T23:57:38.321659Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"ab9d0391dce79465"}
	{"level":"info","ts":"2025-09-16T23:57:38.321995Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"ab9d0391dce79465"}
	{"level":"info","ts":"2025-09-16T23:57:38.324746Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"ab9d0391dce79465","stream-type":"stream MsgApp v2"}
	{"level":"warn","ts":"2025-09-16T23:57:38.324782Z","caller":"rafthttp/stream.go:264","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"ab9d0391dce79465"}
	{"level":"info","ts":"2025-09-16T23:57:38.324796Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"ab9d0391dce79465"}
	{"level":"warn","ts":"2025-09-16T23:57:38.539376Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:45372","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-16T23:57:38.542781Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aec36adc501070cc switched to configuration voters=(4226730353838347643 12366044076840555621 12593026477526642892)"}
	{"level":"info","ts":"2025-09-16T23:57:38.542928Z","caller":"membership/cluster.go:550","msg":"promote member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","promoted-member-id":"ab9d0391dce79465"}
	{"level":"info","ts":"2025-09-16T23:57:38.542988Z","caller":"etcdserver/server.go:1752","msg":"applied a configuration change through raft","local-member-id":"aec36adc501070cc","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"ab9d0391dce79465"}
	{"level":"info","ts":"2025-09-16T23:57:40.311787Z","caller":"etcdserver/server.go:1856","msg":"sent merged snapshot","from":"aec36adc501070cc","to":"3aa85cdcd5e5557b","bytes":876533,"size":"876 kB","took":"30.009467109s"}
	{"level":"info","ts":"2025-09-16T23:57:47.400606Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-16T23:57:51.874557Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-16T23:58:06.103123Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-16T23:58:08.299219Z","caller":"etcdserver/server.go:1856","msg":"sent merged snapshot","from":"aec36adc501070cc","to":"ab9d0391dce79465","bytes":1356737,"size":"1.4 MB","took":"30.011071692s"}
	{"level":"info","ts":"2025-09-17T00:06:46.502551Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1554}
	{"level":"info","ts":"2025-09-17T00:06:46.523688Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1554,"took":"20.616779ms","hash":4277915431,"current-db-size-bytes":3936256,"current-db-size":"3.9 MB","current-db-size-in-use-bytes":2105344,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2025-09-17T00:06:46.523839Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":4277915431,"revision":1554,"compact-revision":-1}
	
	
	==> kernel <==
	 00:10:47 up  2:53,  0 users,  load average: 0.30, 0.37, 0.81
	Linux ha-472903 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [cc69d2451cb65860b5bc78e027be2fc1cb0f9fa6542b4abe3bc1ff1c90a8fe60] <==
	I0917 00:09:57.503751       1 main.go:324] Node ha-472903-m03 has CIDR [10.244.2.0/24] 
	I0917 00:10:07.510274       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:10:07.510320       1 main.go:301] handling current node
	I0917 00:10:07.510336       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:10:07.510341       1 main.go:324] Node ha-472903-m02 has CIDR [10.244.1.0/24] 
	I0917 00:10:07.510554       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:10:07.510567       1 main.go:324] Node ha-472903-m03 has CIDR [10.244.2.0/24] 
	I0917 00:10:17.512521       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:10:17.512563       1 main.go:301] handling current node
	I0917 00:10:17.512582       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:10:17.512589       1 main.go:324] Node ha-472903-m02 has CIDR [10.244.1.0/24] 
	I0917 00:10:17.512785       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:10:17.512800       1 main.go:324] Node ha-472903-m03 has CIDR [10.244.2.0/24] 
	I0917 00:10:27.511383       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:10:27.511448       1 main.go:301] handling current node
	I0917 00:10:27.511469       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:10:27.511476       1 main.go:324] Node ha-472903-m02 has CIDR [10.244.1.0/24] 
	I0917 00:10:27.511660       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:10:27.511671       1 main.go:324] Node ha-472903-m03 has CIDR [10.244.2.0/24] 
	I0917 00:10:37.506147       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:10:37.506186       1 main.go:301] handling current node
	I0917 00:10:37.506204       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:10:37.506209       1 main.go:324] Node ha-472903-m02 has CIDR [10.244.1.0/24] 
	I0917 00:10:37.506448       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:10:37.506459       1 main.go:324] Node ha-472903-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [0aba62132d764965d8e1a80a4a6345bb7e34892b23143da4a7af3450cd465d6c] <==
	I0917 00:04:42.485311       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:05:30.551003       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:06:06.800617       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:06:32.710262       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:06:47.441344       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0917 00:07:34.732036       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:07:42.022448       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:08:46.236959       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:08:51.159386       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:09:52.603432       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:09:53.014406       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0917 00:10:41.954540       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37534: use of closed network connection
	E0917 00:10:42.122977       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37556: use of closed network connection
	E0917 00:10:42.250606       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37572: use of closed network connection
	E0917 00:10:42.442469       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37584: use of closed network connection
	E0917 00:10:42.605380       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37602: use of closed network connection
	E0917 00:10:42.730284       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37612: use of closed network connection
	E0917 00:10:42.884291       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37626: use of closed network connection
	E0917 00:10:43.036952       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37644: use of closed network connection
	E0917 00:10:43.161098       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37658: use of closed network connection
	E0917 00:10:45.408563       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37722: use of closed network connection
	E0917 00:10:45.568465       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37752: use of closed network connection
	E0917 00:10:45.727267       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37770: use of closed network connection
	E0917 00:10:45.883182       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37790: use of closed network connection
	E0917 00:10:46.004301       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37814: use of closed network connection
	
	
	==> kube-controller-manager [087290a41f59caa4f9bc89759bcec6cf90f47c8a2ab83b7c671a8fff35641df9] <==
	I0916 23:56:54.728442       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0916 23:56:54.728466       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0916 23:56:54.728485       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0916 23:56:54.728644       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0916 23:56:54.728665       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0916 23:56:54.728648       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0916 23:56:54.728914       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0916 23:56:54.730175       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0916 23:56:54.730201       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0916 23:56:54.732432       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0916 23:56:54.733452       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0916 23:56:54.735655       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0916 23:56:54.735714       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0916 23:56:54.735760       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0916 23:56:54.735767       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0916 23:56:54.735772       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0916 23:56:54.740680       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-472903" podCIDRs=["10.244.0.0/24"]
	I0916 23:56:54.749950       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0916 23:57:22.933124       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-472903-m02\" does not exist"
	I0916 23:57:22.943785       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-472903-m02" podCIDRs=["10.244.1.0/24"]
	I0916 23:57:24.681339       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-472903-m02"
	I0916 23:57:51.749676       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-472903-m03\" does not exist"
	I0916 23:57:51.772476       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-472903-m03" podCIDRs=["10.244.2.0/24"]
	E0916 23:57:51.829801       1 daemon_controller.go:346] "Unhandled Error" err="kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kube-proxy\", GenerateName:\"\", Namespace:\"kube-system\", SelfLink:\"\", UID:\"3f5da9fc-6769-4ca8-a715-edeace44c646\", ResourceVersion:\"594\", Generation:1, CreationTimestamp:time.Date(2025, time.September, 16, 23, 56, 49, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"k8s-app\":\"kube-proxy\"}, Annotations:map[string]string{\"deprecated.daemonset.template.generation\":\"1\"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc00222d0e0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:\"\", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"
\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"k8s-app\":\"kube-proxy\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:\"kube-proxy\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSourc
e)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc0021ed7c0), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"xtables-lock\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc002fcdce0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolu
meSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIV
olumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"lib-modules\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc002fcdcf8), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtu
alDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:\"kube-proxy\", Image:\"registry.k8s.io/kube-proxy:v1.34.0\", Command:[]string{\"/usr/local/bin/kube-proxy\", \"--config=/var/lib/kube-proxy/config.conf\", \"--hostname-override=$(NODE_NAME)\"}, Args:[]string(nil), WorkingDir:\"\", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:\"NODE_NAME\", Value:\"\", ValueFrom:(*v1.EnvVarSource)(0xc00144a7e0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.Re
sourceList(nil), Claims:[]v1.ResourceClaim(nil)}, ResizePolicy:[]v1.ContainerResizePolicy(nil), RestartPolicy:(*v1.ContainerRestartPolicy)(nil), RestartPolicyRules:[]v1.ContainerRestartRule(nil), VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:\"kube-proxy\", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/var/lib/kube-proxy\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"xtables-lock\", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/run/xtables.lock\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"lib-modules\", ReadOnly:true, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/lib/modules\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Life
cycle:(*v1.Lifecycle)(nil), TerminationMessagePath:\"/dev/termination-log\", TerminationMessagePolicy:\"File\", ImagePullPolicy:\"IfNotPresent\", SecurityContext:(*v1.SecurityContext)(0xc0019549c0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:\"Always\", TerminationGracePeriodSeconds:(*int64)(0xc001900b18), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:\"ClusterFirst\", NodeSelector:map[string]string{\"kubernetes.io/os\":\"linux\"}, ServiceAccountName:\"kube-proxy\", DeprecatedServiceAccount:\"kube-proxy\", AutomountServiceAccountToken:(*bool)(nil), NodeName:\"\", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001ba1200), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:\"\", Subdomain:\"\", Affinity:(*v1.Affinity)(nil), SchedulerName:\"default-scheduler\", Tolerations:[]v1.Toleration{v1.Toleration{Key:\"\", Operator:\"Exists\", Value:\"\", Effect:\"\", Tole
rationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:\"system-node-critical\", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil), SchedulingGates:[]v1.PodSchedulingGate(nil), ResourceClaims:[]v1.PodResourceClaim(nil), Resources:(*v1.ResourceRequirements)(nil), HostnameOverride:(*string)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:\"RollingUpdate\", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc001e14570)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001900b70)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:2, NumberMisscheduled:0, DesiredNumberScheduled:2, NumberReady:2, ObservedGeneration:1, UpdatedNumberScheduled:2, NumberAvailab
le:2, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps \"kube-proxy\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0916 23:57:54.685322       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-472903-m03"
	
	
	==> kube-proxy [92dd4d116eb0387dded82fb32d35690ec2d00e3f5e7ac81bf7aea0c6814edd5e] <==
	I0916 23:56:56.831012       1 server_linux.go:53] "Using iptables proxy"
	I0916 23:56:56.891635       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0916 23:56:56.991820       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0916 23:56:56.991862       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0916 23:56:56.991952       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 23:56:57.015955       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 23:56:57.016001       1 server_linux.go:132] "Using iptables Proxier"
	I0916 23:56:57.021120       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 23:56:57.021457       1 server.go:527] "Version info" version="v1.34.0"
	I0916 23:56:57.021499       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 23:56:57.024872       1 config.go:200] "Starting service config controller"
	I0916 23:56:57.024892       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0916 23:56:57.024900       1 config.go:106] "Starting endpoint slice config controller"
	I0916 23:56:57.024909       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0916 23:56:57.024890       1 config.go:403] "Starting serviceCIDR config controller"
	I0916 23:56:57.024917       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0916 23:56:57.024937       1 config.go:309] "Starting node config controller"
	I0916 23:56:57.024942       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0916 23:56:57.125608       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0916 23:56:57.125691       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0916 23:56:57.125856       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0916 23:56:57.125902       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [bba28cace6502de93aa43db4fb51671581c5074990dea721d98d36d839734a67] <==
	E0916 23:56:48.619869       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0916 23:56:48.649766       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0916 23:56:48.673092       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	I0916 23:56:49.170967       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0916 23:57:51.780040       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-x6twd\": pod kindnet-x6twd is already assigned to node \"ha-472903-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-x6twd" node="ha-472903-m03"
	E0916 23:57:51.780142       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod f2346479-1adb-4bc7-af07-971525be2b05(kube-system/kindnet-x6twd) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-x6twd"
	E0916 23:57:51.780183       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-x6twd\": pod kindnet-x6twd is already assigned to node \"ha-472903-m03\"" logger="UnhandledError" pod="kube-system/kindnet-x6twd"
	I0916 23:57:51.782132       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-x6twd" node="ha-472903-m03"
	E0916 23:58:37.948695       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-x6xc9\": pod busybox-7b57f96db7-x6xc9 is already assigned to node \"ha-472903-m02\"" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-x6xc9" node="ha-472903-m02"
	E0916 23:58:37.948846       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 565a634f-ab41-4776-ba5d-63a601bfec48(default/busybox-7b57f96db7-x6xc9) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="default/busybox-7b57f96db7-x6xc9"
	E0916 23:58:37.948875       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-x6xc9\": pod busybox-7b57f96db7-x6xc9 is already assigned to node \"ha-472903-m02\"" logger="UnhandledError" pod="default/busybox-7b57f96db7-x6xc9"
	I0916 23:58:37.950251       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7b57f96db7-x6xc9" node="ha-472903-m02"
	I0916 23:58:37.966099       1 cache.go:512] "Pod was added to a different node than it was assumed" podKey="47b06c15-c007-4c50-a248-5411a0f4b6a7" pod="default/busybox-7b57f96db7-4jfjt" assumedNode="ha-472903-m02" currentNode="ha-472903"
	E0916 23:58:37.968241       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-4jfjt\": pod busybox-7b57f96db7-4jfjt is already assigned to node \"ha-472903-m02\"" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-4jfjt" node="ha-472903"
	E0916 23:58:37.968351       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 47b06c15-c007-4c50-a248-5411a0f4b6a7(default/busybox-7b57f96db7-4jfjt) was assumed on ha-472903 but assigned to ha-472903-m02" logger="UnhandledError" pod="default/busybox-7b57f96db7-4jfjt"
	E0916 23:58:37.968376       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-4jfjt\": pod busybox-7b57f96db7-4jfjt is already assigned to node \"ha-472903-m02\"" logger="UnhandledError" pod="default/busybox-7b57f96db7-4jfjt"
	I0916 23:58:37.969472       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7b57f96db7-4jfjt" node="ha-472903-m02"
	E0916 23:58:38.002469       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-wp95z\": pod busybox-7b57f96db7-wp95z is being deleted, cannot be assigned to a host" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-wp95z" node="ha-472903"
	E0916 23:58:38.002779       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-wp95z\": pod busybox-7b57f96db7-wp95z is being deleted, cannot be assigned to a host" logger="UnhandledError" pod="default/busybox-7b57f96db7-wp95z"
	E0916 23:58:38.046394       1 pod_status_patch.go:111] "Failed to patch pod status" err="pods \"busybox-7b57f96db7-xnrsc\" not found" pod="default/busybox-7b57f96db7-xnrsc"
	E0916 23:58:38.046880       1 pod_status_patch.go:111] "Failed to patch pod status" err="pods \"busybox-7b57f96db7-wp95z\" not found" pod="default/busybox-7b57f96db7-wp95z"
	E0916 23:58:40.050124       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-6hrm6\": pod busybox-7b57f96db7-6hrm6 is already assigned to node \"ha-472903\"" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-6hrm6" node="ha-472903"
	E0916 23:58:40.050213       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod bd03bad4-af1e-42d0-81fb-6fcaeaa8775e(default/busybox-7b57f96db7-6hrm6) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="default/busybox-7b57f96db7-6hrm6"
	E0916 23:58:40.050248       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-6hrm6\": pod busybox-7b57f96db7-6hrm6 is already assigned to node \"ha-472903\"" logger="UnhandledError" pod="default/busybox-7b57f96db7-6hrm6"
	I0916 23:58:40.051853       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7b57f96db7-6hrm6" node="ha-472903"
	
	
	==> kubelet <==
	Sep 16 23:58:38 ha-472903 kubelet[1676]: E0916 23:58:38.235025    1676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cc7a8d10-408f-4655-ac70-54b4af22d9eb-kube-api-access-hrb62 podName:cc7a8d10-408f-4655-ac70-54b4af22d9eb nodeName:}" failed. No retries permitted until 2025-09-16 23:58:38.735007966 +0000 UTC m=+109.066439678 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-hrb62" (UniqueName: "kubernetes.io/projected/cc7a8d10-408f-4655-ac70-54b4af22d9eb-kube-api-access-hrb62") pod "busybox-7b57f96db7-5pwbb" (UID: "cc7a8d10-408f-4655-ac70-54b4af22d9eb") : failed to fetch token: pod "busybox-7b57f96db7-5pwbb" not found
	Sep 16 23:58:38 ha-472903 kubelet[1676]: E0916 23:58:38.737179    1676 projected.go:196] Error preparing data for projected volume kube-api-access-xrpwc for pod default/busybox-7b57f96db7-xj7ks: failed to fetch token: pod "busybox-7b57f96db7-xj7ks" not found
	Sep 16 23:58:38 ha-472903 kubelet[1676]: E0916 23:58:38.737266    1676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cac915f6-7630-4320-b6d2-fd18f3c19a17-kube-api-access-xrpwc podName:cac915f6-7630-4320-b6d2-fd18f3c19a17 nodeName:}" failed. No retries permitted until 2025-09-16 23:58:39.737245356 +0000 UTC m=+110.068677057 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-xrpwc" (UniqueName: "kubernetes.io/projected/cac915f6-7630-4320-b6d2-fd18f3c19a17-kube-api-access-xrpwc") pod "busybox-7b57f96db7-xj7ks" (UID: "cac915f6-7630-4320-b6d2-fd18f3c19a17") : failed to fetch token: pod "busybox-7b57f96db7-xj7ks" not found
	Sep 16 23:58:38 ha-472903 kubelet[1676]: E0916 23:58:38.737179    1676 projected.go:196] Error preparing data for projected volume kube-api-access-hrb62 for pod default/busybox-7b57f96db7-5pwbb: failed to fetch token: pod "busybox-7b57f96db7-5pwbb" not found
	Sep 16 23:58:38 ha-472903 kubelet[1676]: E0916 23:58:38.737371    1676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cc7a8d10-408f-4655-ac70-54b4af22d9eb-kube-api-access-hrb62 podName:cc7a8d10-408f-4655-ac70-54b4af22d9eb nodeName:}" failed. No retries permitted until 2025-09-16 23:58:39.737351933 +0000 UTC m=+110.068783647 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-hrb62" (UniqueName: "kubernetes.io/projected/cc7a8d10-408f-4655-ac70-54b4af22d9eb-kube-api-access-hrb62") pod "busybox-7b57f96db7-5pwbb" (UID: "cc7a8d10-408f-4655-ac70-54b4af22d9eb") : failed to fetch token: pod "busybox-7b57f96db7-5pwbb" not found
	Sep 16 23:58:39 ha-472903 kubelet[1676]: E0916 23:58:39.027158    1676 status_manager.go:1018] "Failed to get status for pod" err="pods \"busybox-7b57f96db7-5pwbb\" is forbidden: User \"system:node:ha-472903\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'ha-472903' and this object" podUID="cc7a8d10-408f-4655-ac70-54b4af22d9eb" pod="default/busybox-7b57f96db7-5pwbb"
	Sep 16 23:58:39 ha-472903 kubelet[1676]: E0916 23:58:39.028111    1676 status_manager.go:1018] "Failed to get status for pod" err="pods \"busybox-7b57f96db7-xj7ks\" is forbidden: User \"system:node:ha-472903\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'ha-472903' and this object" podUID="cac915f6-7630-4320-b6d2-fd18f3c19a17" pod="default/busybox-7b57f96db7-xj7ks"
	Sep 16 23:58:39 ha-472903 kubelet[1676]: E0916 23:58:39.039445    1676 status_manager.go:1018] "Failed to get status for pod" err="pods \"busybox-7b57f96db7-xj7ks\" is forbidden: User \"system:node:ha-472903\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'ha-472903' and this object" podUID="cac915f6-7630-4320-b6d2-fd18f3c19a17" pod="default/busybox-7b57f96db7-xj7ks"
	Sep 16 23:58:39 ha-472903 kubelet[1676]: E0916 23:58:39.042381    1676 status_manager.go:1018] "Failed to get status for pod" err="pods \"busybox-7b57f96db7-5pwbb\" is forbidden: User \"system:node:ha-472903\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'ha-472903' and this object" podUID="cc7a8d10-408f-4655-ac70-54b4af22d9eb" pod="default/busybox-7b57f96db7-5pwbb"
	Sep 16 23:58:39 ha-472903 kubelet[1676]: I0916 23:58:39.138755    1676 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9njqf\" (UniqueName: \"kubernetes.io/projected/59b9a23c-498d-4802-9790-70931c4a2c06-kube-api-access-9njqf\") pod \"59b9a23c-498d-4802-9790-70931c4a2c06\" (UID: \"59b9a23c-498d-4802-9790-70931c4a2c06\") "
	Sep 16 23:58:39 ha-472903 kubelet[1676]: I0916 23:58:39.138821    1676 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hrb62\" (UniqueName: \"kubernetes.io/projected/cc7a8d10-408f-4655-ac70-54b4af22d9eb-kube-api-access-hrb62\") on node \"ha-472903\" DevicePath \"\""
	Sep 16 23:58:39 ha-472903 kubelet[1676]: I0916 23:58:39.138836    1676 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xrpwc\" (UniqueName: \"kubernetes.io/projected/cac915f6-7630-4320-b6d2-fd18f3c19a17-kube-api-access-xrpwc\") on node \"ha-472903\" DevicePath \"\""
	Sep 16 23:58:39 ha-472903 kubelet[1676]: I0916 23:58:39.140952    1676 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59b9a23c-498d-4802-9790-70931c4a2c06-kube-api-access-9njqf" (OuterVolumeSpecName: "kube-api-access-9njqf") pod "59b9a23c-498d-4802-9790-70931c4a2c06" (UID: "59b9a23c-498d-4802-9790-70931c4a2c06"). InnerVolumeSpecName "kube-api-access-9njqf". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Sep 16 23:58:39 ha-472903 kubelet[1676]: I0916 23:58:39.239025    1676 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9njqf\" (UniqueName: \"kubernetes.io/projected/59b9a23c-498d-4802-9790-70931c4a2c06-kube-api-access-9njqf\") on node \"ha-472903\" DevicePath \"\""
	Sep 16 23:58:39 ha-472903 kubelet[1676]: E0916 23:58:39.752137    1676 status_manager.go:1018] "Failed to get status for pod" err="pods \"busybox-7b57f96db7-5pwbb\" is forbidden: User \"system:node:ha-472903\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'ha-472903' and this object" podUID="cc7a8d10-408f-4655-ac70-54b4af22d9eb" pod="default/busybox-7b57f96db7-5pwbb"
	Sep 16 23:58:39 ha-472903 kubelet[1676]: E0916 23:58:39.753199    1676 status_manager.go:1018] "Failed to get status for pod" err="pods \"busybox-7b57f96db7-xj7ks\" is forbidden: User \"system:node:ha-472903\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'ha-472903' and this object" podUID="cac915f6-7630-4320-b6d2-fd18f3c19a17" pod="default/busybox-7b57f96db7-xj7ks"
	Sep 16 23:58:39 ha-472903 kubelet[1676]: I0916 23:58:39.754268    1676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cac915f6-7630-4320-b6d2-fd18f3c19a17" path="/var/lib/kubelet/pods/cac915f6-7630-4320-b6d2-fd18f3c19a17/volumes"
	Sep 16 23:58:39 ha-472903 kubelet[1676]: I0916 23:58:39.754475    1676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc7a8d10-408f-4655-ac70-54b4af22d9eb" path="/var/lib/kubelet/pods/cc7a8d10-408f-4655-ac70-54b4af22d9eb/volumes"
	Sep 16 23:58:40 ha-472903 kubelet[1676]: E0916 23:58:40.056772    1676 status_manager.go:1018] "Failed to get status for pod" err="pods \"busybox-7b57f96db7-5pwbb\" is forbidden: User \"system:node:ha-472903\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'ha-472903' and this object" podUID="cc7a8d10-408f-4655-ac70-54b4af22d9eb" pod="default/busybox-7b57f96db7-5pwbb"
	Sep 16 23:58:40 ha-472903 kubelet[1676]: E0916 23:58:40.057611    1676 status_manager.go:1018] "Failed to get status for pod" err="pods \"busybox-7b57f96db7-xj7ks\" is forbidden: User \"system:node:ha-472903\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'ha-472903' and this object" podUID="cac915f6-7630-4320-b6d2-fd18f3c19a17" pod="default/busybox-7b57f96db7-xj7ks"
	Sep 16 23:58:40 ha-472903 kubelet[1676]: E0916 23:58:40.059208    1676 status_manager.go:1018] "Failed to get status for pod" err="pods \"busybox-7b57f96db7-5pwbb\" is forbidden: User \"system:node:ha-472903\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'ha-472903' and this object" podUID="cc7a8d10-408f-4655-ac70-54b4af22d9eb" pod="default/busybox-7b57f96db7-5pwbb"
	Sep 16 23:58:40 ha-472903 kubelet[1676]: E0916 23:58:40.060512    1676 status_manager.go:1018] "Failed to get status for pod" err="pods \"busybox-7b57f96db7-xj7ks\" is forbidden: User \"system:node:ha-472903\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'ha-472903' and this object" podUID="cac915f6-7630-4320-b6d2-fd18f3c19a17" pod="default/busybox-7b57f96db7-xj7ks"
	Sep 16 23:58:40 ha-472903 kubelet[1676]: I0916 23:58:40.145054    1676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjkrp\" (UniqueName: \"kubernetes.io/projected/bd03bad4-af1e-42d0-81fb-6fcaeaa8775e-kube-api-access-pjkrp\") pod \"busybox-7b57f96db7-6hrm6\" (UID: \"bd03bad4-af1e-42d0-81fb-6fcaeaa8775e\") " pod="default/busybox-7b57f96db7-6hrm6"
	Sep 16 23:58:41 ha-472903 kubelet[1676]: I0916 23:58:41.754549    1676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59b9a23c-498d-4802-9790-70931c4a2c06" path="/var/lib/kubelet/pods/59b9a23c-498d-4802-9790-70931c4a2c06/volumes"
	Sep 16 23:58:43 ha-472903 kubelet[1676]: I0916 23:58:43.049200    1676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-7b57f96db7-6hrm6" podStartSLOduration=3.061025393 podStartE2EDuration="5.049179166s" podCreationTimestamp="2025-09-16 23:58:38 +0000 UTC" firstStartedPulling="2025-09-16 23:58:40.45690156 +0000 UTC m=+110.788333264" lastFinishedPulling="2025-09-16 23:58:42.445055322 +0000 UTC m=+112.776487037" observedRunningTime="2025-09-16 23:58:43.049092106 +0000 UTC m=+113.380523828" watchObservedRunningTime="2025-09-16 23:58:43.049179166 +0000 UTC m=+113.380610888"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-472903 -n ha-472903
helpers_test.go:269: (dbg) Run:  kubectl --context ha-472903 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-7b57f96db7-mknzs
helpers_test.go:282: ======> post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context ha-472903 describe pod busybox-7b57f96db7-mknzs
helpers_test.go:290: (dbg) kubectl --context ha-472903 describe pod busybox-7b57f96db7-mknzs:

                                                
                                                
-- stdout --
	Name:             busybox-7b57f96db7-mknzs
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             ha-472903-m03/192.168.49.4
	Start Time:       Tue, 16 Sep 2025 23:58:37 +0000
	Labels:           app=busybox
	                  pod-template-hash=7b57f96db7
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7b57f96db7
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gmz92 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-gmz92:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason                  Age                  From               Message
	  ----     ------                  ----                 ----               -------
	  Warning  FailedScheduling        12m                  default-scheduler  running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "busybox-7b57f96db7-mknzs": pod busybox-7b57f96db7-mknzs is already assigned to node "ha-472903-m03"
	  Warning  FailedScheduling        12m                  default-scheduler  running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "busybox-7b57f96db7-mknzs": pod busybox-7b57f96db7-mknzs is already assigned to node "ha-472903-m03"
	  Normal   Scheduled               12m                  default-scheduler  Successfully assigned default/busybox-7b57f96db7-mknzs to ha-472903-m03
	  Warning  FailedCreatePodSandBox  12m                  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "72439adc47052c2da00cee62587d780275cf6c2423dee9831567464d4725ee9d": failed to find network info for sandbox "72439adc47052c2da00cee62587d780275cf6c2423dee9831567464d4725ee9d"
	  Warning  FailedCreatePodSandBox  11m                  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "24ab8b6bd2f38653d2326c375fc81ebf17317e36885547c7b42c011bb95889ed": failed to find network info for sandbox "24ab8b6bd2f38653d2326c375fc81ebf17317e36885547c7b42c011bb95889ed"
	  Warning  FailedCreatePodSandBox  11m                  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "300fece4c100bc3e68a19e1fa6f46c8a378753727caaaeb1533dab71f234be58": failed to find network info for sandbox "300fece4c100bc3e68a19e1fa6f46c8a378753727caaaeb1533dab71f234be58"
	  Warning  FailedCreatePodSandBox  11m                  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "e49a14b4de5e24fa450a43c124b2916ad7028d35cbc3b0f74595e68ee161d1d0": failed to find network info for sandbox "e49a14b4de5e24fa450a43c124b2916ad7028d35cbc3b0f74595e68ee161d1d0"
	  Warning  FailedCreatePodSandBox  11m                  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "efa290ca498f7c70ae29d8d97709edda97bc6b062aac05a3ef6d6a83fbd42797": failed to find network info for sandbox "efa290ca498f7c70ae29d8d97709edda97bc6b062aac05a3ef6d6a83fbd42797"
	  Warning  FailedCreatePodSandBox  11m                  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "d5851ce1270b1c8994400ecd7bdabadaf895488957ffb5173dcd7e289db1de6c": failed to find network info for sandbox "d5851ce1270b1c8994400ecd7bdabadaf895488957ffb5173dcd7e289db1de6c"
	  Warning  FailedCreatePodSandBox  10m                  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "11aaa894ae434b08da8122c8f3445d03b4c1e54dfb071596f63a0e4654f49f10": failed to find network info for sandbox "11aaa894ae434b08da8122c8f3445d03b4c1e54dfb071596f63a0e4654f49f10"
	  Warning  FailedCreatePodSandBox  10m                  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "c8126e80126ff891a4935c60cfec55753f6bb51d789c0eb46098b72267c7d53c": failed to find network info for sandbox "c8126e80126ff891a4935c60cfec55753f6bb51d789c0eb46098b72267c7d53c"
	  Warning  FailedCreatePodSandBox  10m                  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "1389a2f92f350a6f495c76f80031300b6442a6a0cc67abd4b045ff9150b3fc3a": failed to find network info for sandbox "1389a2f92f350a6f495c76f80031300b6442a6a0cc67abd4b045ff9150b3fc3a"
	  Warning  FailedCreatePodSandBox  2m5s (x38 over 10m)  kubelet            (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "c3a9afe91461f3ea405980387ac5fab85785c7cf3f180d2b0f894e1df94ca62d": failed to find network info for sandbox "c3a9afe91461f3ea405980387ac5fab85785c7cf3f180d2b0f894e1df94ca62d"

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (2.83s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (29.82s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 node add --alsologtostderr -v 5
E0917 00:10:49.958727  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/functional-695580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-472903 node add --alsologtostderr -v 5: exit status 80 (27.833101767s)

                                                
                                                
-- stdout --
	* Adding node m04 to cluster ha-472903 as [worker]
	* Starting "ha-472903-m04" worker node in "ha-472903" cluster
	* Pulling base image v0.0.48 ...
	* Stopping node "ha-472903-m04"  ...
	* Deleting "ha-472903-m04" in docker ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 00:10:48.017850  821000 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:10:48.017965  821000 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:10:48.017974  821000 out.go:374] Setting ErrFile to fd 2...
	I0917 00:10:48.017978  821000 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:10:48.018189  821000 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-749120/.minikube/bin
	I0917 00:10:48.018575  821000 mustload.go:65] Loading cluster: ha-472903
	I0917 00:10:48.019059  821000 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:10:48.019669  821000 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0917 00:10:48.038089  821000 host.go:66] Checking if "ha-472903" exists ...
	I0917 00:10:48.038310  821000 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:10:48.094809  821000 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-17 00:10:48.085580364 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:10:48.095158  821000 cli_runner.go:164] Run: docker container inspect ha-472903-m02 --format={{.State.Status}}
	I0917 00:10:48.112679  821000 host.go:66] Checking if "ha-472903-m02" exists ...
	I0917 00:10:48.113230  821000 cli_runner.go:164] Run: docker container inspect ha-472903-m03 --format={{.State.Status}}
	I0917 00:10:48.130841  821000 host.go:66] Checking if "ha-472903-m03" exists ...
	I0917 00:10:48.131093  821000 api_server.go:166] Checking apiserver status ...
	I0917 00:10:48.131154  821000 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:10:48.131218  821000 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0917 00:10:48.148351  821000 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0917 00:10:48.247755  821000 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1514/cgroup
	W0917 00:10:48.257180  821000 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1514/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:10:48.257237  821000 ssh_runner.go:195] Run: ls
	I0917 00:10:48.260472  821000 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0917 00:10:48.264737  821000 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0917 00:10:48.266314  821000 out.go:179] * Adding node m04 to cluster ha-472903 as [worker]
	I0917 00:10:48.267705  821000 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:10:48.267820  821000 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0917 00:10:48.269278  821000 out.go:179] * Starting "ha-472903-m04" worker node in "ha-472903" cluster
	I0917 00:10:48.270242  821000 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0917 00:10:48.271170  821000 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:10:48.272084  821000 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0917 00:10:48.272117  821000 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4
	I0917 00:10:48.272126  821000 cache.go:58] Caching tarball of preloaded images
	I0917 00:10:48.272173  821000 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:10:48.272214  821000 preload.go:172] Found /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 00:10:48.272226  821000 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0917 00:10:48.272326  821000 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0917 00:10:48.292241  821000 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:10:48.292260  821000 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:10:48.292273  821000 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:10:48.292295  821000 start.go:360] acquireMachinesLock for ha-472903-m04: {Name:mkdbbd0d5b3cd7ad4b13d37f2d562d6d6421c5de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:10:48.292384  821000 start.go:364] duration metric: took 66.361µs to acquireMachinesLock for "ha-472903-m04"
	I0917 00:10:48.292406  821000 start.go:93] Provisioning new machine with config: &{Name:ha-472903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingre
ss:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disabl
eOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime: ControlPlane:false Worker:true}
	I0917 00:10:48.292525  821000 start.go:125] createHost starting for "m04" (driver="docker")
	I0917 00:10:48.294186  821000 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0917 00:10:48.294280  821000 start.go:159] libmachine.API.Create for "ha-472903" (driver="docker")
	I0917 00:10:48.294308  821000 client.go:168] LocalClient.Create starting
	I0917 00:10:48.294390  821000 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem
	I0917 00:10:48.294436  821000 main.go:141] libmachine: Decoding PEM data...
	I0917 00:10:48.294453  821000 main.go:141] libmachine: Parsing certificate...
	I0917 00:10:48.294521  821000 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem
	I0917 00:10:48.294544  821000 main.go:141] libmachine: Decoding PEM data...
	I0917 00:10:48.294555  821000 main.go:141] libmachine: Parsing certificate...
	I0917 00:10:48.294740  821000 cli_runner.go:164] Run: docker network inspect ha-472903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:10:48.311245  821000 network_create.go:77] Found existing network {name:ha-472903 subnet:0xc001330510 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0917 00:10:48.311286  821000 kic.go:121] calculated static IP "192.168.49.5" for the "ha-472903-m04" container
	I0917 00:10:48.311346  821000 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0917 00:10:48.329223  821000 cli_runner.go:164] Run: docker volume create ha-472903-m04 --label name.minikube.sigs.k8s.io=ha-472903-m04 --label created_by.minikube.sigs.k8s.io=true
	I0917 00:10:48.346018  821000 oci.go:103] Successfully created a docker volume ha-472903-m04
	I0917 00:10:48.346093  821000 cli_runner.go:164] Run: docker run --rm --name ha-472903-m04-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-472903-m04 --entrypoint /usr/bin/test -v ha-472903-m04:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0917 00:10:48.701770  821000 oci.go:107] Successfully prepared a docker volume ha-472903-m04
	I0917 00:10:48.701808  821000 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0917 00:10:48.701828  821000 kic.go:194] Starting extracting preloaded images to volume ...
	I0917 00:10:48.701876  821000 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-472903-m04:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0917 00:10:52.927853  821000 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-472903-m04:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.225928961s)
	I0917 00:10:52.927895  821000 kic.go:203] duration metric: took 4.22606153s to extract preloaded images to volume ...
	W0917 00:10:52.928017  821000 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0917 00:10:52.928052  821000 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0917 00:10:52.928101  821000 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0917 00:10:52.980868  821000 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-472903-m04 --name ha-472903-m04 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-472903-m04 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-472903-m04 --network ha-472903 --ip 192.168.49.5 --volume ha-472903-m04:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0917 00:10:53.237890  821000 cli_runner.go:164] Run: docker container inspect ha-472903-m04 --format={{.State.Running}}
	I0917 00:10:53.255501  821000 cli_runner.go:164] Run: docker container inspect ha-472903-m04 --format={{.State.Status}}
	I0917 00:10:53.272665  821000 cli_runner.go:164] Run: docker exec ha-472903-m04 stat /var/lib/dpkg/alternatives/iptables
	I0917 00:10:53.320181  821000 oci.go:144] the created container "ha-472903-m04" has a running status.
	I0917 00:10:53.320211  821000 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m04/id_rsa...
	I0917 00:10:53.678778  821000 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m04/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0917 00:10:53.678830  821000 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m04/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0917 00:10:53.707873  821000 cli_runner.go:164] Run: docker container inspect ha-472903-m04 --format={{.State.Status}}
	I0917 00:10:53.726926  821000 cli_runner.go:164] Run: docker inspect ha-472903-m04
	I0917 00:10:53.743831  821000 errors.go:84] Postmortem inspect ("docker inspect ha-472903-m04"): -- stdout --
	[
	    {
	        "Id": "c1cb7be46c63273125821905eca7927f91d4191029f350af0778b4a946ccc8b3",
	        "Created": "2025-09-17T00:10:52.99711208Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "exited",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 255,
	            "Error": "",
	            "StartedAt": "2025-09-17T00:10:53.028937258Z",
	            "FinishedAt": "2025-09-17T00:10:53.375878708Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/c1cb7be46c63273125821905eca7927f91d4191029f350af0778b4a946ccc8b3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c1cb7be46c63273125821905eca7927f91d4191029f350af0778b4a946ccc8b3/hostname",
	        "HostsPath": "/var/lib/docker/containers/c1cb7be46c63273125821905eca7927f91d4191029f350af0778b4a946ccc8b3/hosts",
	        "LogPath": "/var/lib/docker/containers/c1cb7be46c63273125821905eca7927f91d4191029f350af0778b4a946ccc8b3/c1cb7be46c63273125821905eca7927f91d4191029f350af0778b4a946ccc8b3-json.log",
	        "Name": "/ha-472903-m04",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-472903-m04:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-472903",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c1cb7be46c63273125821905eca7927f91d4191029f350af0778b4a946ccc8b3",
	                "LowerDir": "/var/lib/docker/overlay2/20085433dba2990e0d86643ef96743bd4d985f74f69b8ecda5c9c5af2ac13c3e-init/diff:/var/lib/docker/overlay2/949a3fbecd0c2c005aa419b7ddc214e7bf4333225d7b227e8b0d0ea188b945ec/diff",
	                "MergedDir": "/var/lib/docker/overlay2/20085433dba2990e0d86643ef96743bd4d985f74f69b8ecda5c9c5af2ac13c3e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/20085433dba2990e0d86643ef96743bd4d985f74f69b8ecda5c9c5af2ac13c3e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/20085433dba2990e0d86643ef96743bd4d985f74f69b8ecda5c9c5af2ac13c3e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-472903-m04",
	                "Source": "/var/lib/docker/volumes/ha-472903-m04/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-472903-m04",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-472903-m04",
	                "name.minikube.sigs.k8s.io": "ha-472903-m04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "",
	            "SandboxKey": "",
	            "Ports": {},
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-472903": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.5"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "22d49b2f397dfabc2a3967bd54b05204a52976e683f65ff07bff00e793040bef",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-472903-m04",
	                        "c1cb7be46c63"
	                    ]
	                }
	            }
	        }
	    }
	]
	
	-- /stdout --
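	[Note] The postmortem inspect above shows the node container already in State.Status "exited" with ExitCode 255, which is why the log collection that follows is triggered. The state probe used throughout this log is the docker inspect Go template `{{.State.Status}}`; a small self-contained sketch of that probe:

	package sketch

	import (
		"os/exec"
		"strings"
	)

	// containerStatus returns the docker container state string ("running",
	// "exited", ...) using the same inspect template that appears in the log,
	// so callers can decide whether to collect postmortem output.
	func containerStatus(name string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}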
	I0917 00:10:53.743898  821000 cli_runner.go:164] Run: docker logs --timestamps --details ha-472903-m04
	I0917 00:10:53.762322  821000 errors.go:91] Postmortem logs ("docker logs --timestamps --details ha-472903-m04"): -- stdout --
	2025-09-17T00:10:53.231818168Z  + userns=
	2025-09-17T00:10:53.231860765Z  + grep -Eqv '0[[:space:]]+0[[:space:]]+4294967295' /proc/self/uid_map
	2025-09-17T00:10:53.234441023Z  + validate_userns
	2025-09-17T00:10:53.234459999Z  + [[ -z '' ]]
	2025-09-17T00:10:53.234462575Z  + return
	2025-09-17T00:10:53.234464253Z  + configure_containerd
	2025-09-17T00:10:53.234465983Z  + local snapshotter=
	2025-09-17T00:10:53.234467737Z  + [[ -n '' ]]
	2025-09-17T00:10:53.234521041Z  + [[ -z '' ]]
	2025-09-17T00:10:53.235028531Z  ++ stat -f -c %T /kind
	2025-09-17T00:10:53.236545491Z  + container_filesystem=overlayfs
	2025-09-17T00:10:53.236561697Z  + [[ overlayfs == \z\f\s ]]
	2025-09-17T00:10:53.236565547Z  + [[ -n '' ]]
	2025-09-17T00:10:53.236568388Z  + configure_proxy
	2025-09-17T00:10:53.236571237Z  + mkdir -p /etc/systemd/system.conf.d/
	2025-09-17T00:10:53.242847732Z  + [[ ! -z '' ]]
	2025-09-17T00:10:53.242861379Z  + cat
	2025-09-17T00:10:53.243967937Z  + fix_mount
	2025-09-17T00:10:53.243982070Z  + echo 'INFO: ensuring we can execute mount/umount even with userns-remap'
	2025-09-17T00:10:53.243985004Z  INFO: ensuring we can execute mount/umount even with userns-remap
	2025-09-17T00:10:53.244373522Z  ++ which mount
	2025-09-17T00:10:53.245634593Z  ++ which umount
	2025-09-17T00:10:53.246531238Z  + chown root:root /usr/bin/mount /usr/bin/umount
	2025-09-17T00:10:53.253747008Z  ++ which mount
	2025-09-17T00:10:53.255036258Z  ++ which umount
	2025-09-17T00:10:53.255979983Z  + chmod -s /usr/bin/mount /usr/bin/umount
	2025-09-17T00:10:53.257558984Z  +++ which mount
	2025-09-17T00:10:53.258387414Z  ++ stat -f -c %T /usr/bin/mount
	2025-09-17T00:10:53.259294327Z  + [[ overlayfs == \a\u\f\s ]]
	2025-09-17T00:10:53.259307185Z  + echo 'INFO: remounting /sys read-only'
	2025-09-17T00:10:53.259310601Z  INFO: remounting /sys read-only
	2025-09-17T00:10:53.259313325Z  + mount -o remount,ro /sys
	2025-09-17T00:10:53.260996138Z  + echo 'INFO: making mounts shared'
	2025-09-17T00:10:53.261010122Z  INFO: making mounts shared
	2025-09-17T00:10:53.261013440Z  + mount --make-rshared /
	2025-09-17T00:10:53.262677319Z  + retryable_fix_cgroup
	2025-09-17T00:10:53.263086798Z  ++ seq 0 10
	2025-09-17T00:10:53.263943022Z  + for i in $(seq 0 10)
	2025-09-17T00:10:53.264087299Z  + fix_cgroup
	2025-09-17T00:10:53.264364808Z  + [[ -f /sys/fs/cgroup/cgroup.controllers ]]
	2025-09-17T00:10:53.264375248Z  + echo 'INFO: detected cgroup v2'
	2025-09-17T00:10:53.264378598Z  INFO: detected cgroup v2
	2025-09-17T00:10:53.264393357Z  + return
	2025-09-17T00:10:53.264395969Z  + return
	2025-09-17T00:10:53.264398581Z  + fix_machine_id
	2025-09-17T00:10:53.264400933Z  + echo 'INFO: clearing and regenerating /etc/machine-id'
	2025-09-17T00:10:53.264403302Z  INFO: clearing and regenerating /etc/machine-id
	2025-09-17T00:10:53.264405927Z  + rm -f /etc/machine-id
	2025-09-17T00:10:53.265289170Z  + systemd-machine-id-setup
	2025-09-17T00:10:53.268582315Z  Initializing machine ID from random generator.
	2025-09-17T00:10:53.270428144Z  + fix_product_name
	2025-09-17T00:10:53.270443382Z  + [[ -f /sys/class/dmi/id/product_name ]]
	2025-09-17T00:10:53.270504616Z  + echo 'INFO: faking /sys/class/dmi/id/product_name to be "kind"'
	2025-09-17T00:10:53.270510262Z  INFO: faking /sys/class/dmi/id/product_name to be "kind"
	2025-09-17T00:10:53.270513257Z  + echo kind
	2025-09-17T00:10:53.271599836Z  + mount -o ro,bind /kind/product_name /sys/class/dmi/id/product_name
	2025-09-17T00:10:53.272961873Z  + fix_product_uuid
	2025-09-17T00:10:53.272975832Z  + [[ ! -f /kind/product_uuid ]]
	2025-09-17T00:10:53.272979326Z  + cat /proc/sys/kernel/random/uuid
	2025-09-17T00:10:53.274079028Z  + [[ -f /sys/class/dmi/id/product_uuid ]]
	2025-09-17T00:10:53.274092709Z  + echo 'INFO: faking /sys/class/dmi/id/product_uuid to be random'
	2025-09-17T00:10:53.274095216Z  INFO: faking /sys/class/dmi/id/product_uuid to be random
	2025-09-17T00:10:53.274097452Z  + mount -o ro,bind /kind/product_uuid /sys/class/dmi/id/product_uuid
	2025-09-17T00:10:53.275643605Z  + [[ -f /sys/devices/virtual/dmi/id/product_uuid ]]
	2025-09-17T00:10:53.275657880Z  + echo 'INFO: faking /sys/devices/virtual/dmi/id/product_uuid as well'
	2025-09-17T00:10:53.275661222Z  INFO: faking /sys/devices/virtual/dmi/id/product_uuid as well
	2025-09-17T00:10:53.275664456Z  + mount -o ro,bind /kind/product_uuid /sys/devices/virtual/dmi/id/product_uuid
	2025-09-17T00:10:53.277256783Z  + select_iptables
	2025-09-17T00:10:53.277269324Z  + local mode num_legacy_lines num_nft_lines
	2025-09-17T00:10:53.278458425Z  ++ grep -c '^-'
	2025-09-17T00:10:53.281004923Z  ++ true
	2025-09-17T00:10:53.281134333Z  + num_legacy_lines=0
	2025-09-17T00:10:53.282137271Z  ++ grep -c '^-'
	2025-09-17T00:10:53.288136273Z  + num_nft_lines=6
	2025-09-17T00:10:53.288151507Z  + '[' 0 -ge 6 ']'
	2025-09-17T00:10:53.288154761Z  + mode=nft
	2025-09-17T00:10:53.288157394Z  + echo 'INFO: setting iptables to detected mode: nft'
	2025-09-17T00:10:53.288160036Z  INFO: setting iptables to detected mode: nft
	2025-09-17T00:10:53.288162945Z  + update-alternatives --set iptables /usr/sbin/iptables-nft
	2025-09-17T00:10:53.288179157Z  + echo 'retryable update-alternatives: --set iptables /usr/sbin/iptables-nft'
	2025-09-17T00:10:53.288182313Z  + local 'args=--set iptables /usr/sbin/iptables-nft'
	2025-09-17T00:10:53.288605101Z  ++ seq 0 15
	2025-09-17T00:10:53.289394762Z  + for i in $(seq 0 15)
	2025-09-17T00:10:53.289404501Z  + /usr/bin/update-alternatives --set iptables /usr/sbin/iptables-nft
	2025-09-17T00:10:53.292028993Z  + return
	2025-09-17T00:10:53.292043761Z  + update-alternatives --set ip6tables /usr/sbin/ip6tables-nft
	2025-09-17T00:10:53.292056948Z  + echo 'retryable update-alternatives: --set ip6tables /usr/sbin/ip6tables-nft'
	2025-09-17T00:10:53.292108194Z  + local 'args=--set ip6tables /usr/sbin/ip6tables-nft'
	2025-09-17T00:10:53.292550952Z  ++ seq 0 15
	2025-09-17T00:10:53.293482707Z  + for i in $(seq 0 15)
	2025-09-17T00:10:53.293497510Z  + /usr/bin/update-alternatives --set ip6tables /usr/sbin/ip6tables-nft
	2025-09-17T00:10:53.295881124Z  + return
	2025-09-17T00:10:53.295893931Z  + enable_network_magic
	2025-09-17T00:10:53.296033616Z  + local docker_embedded_dns_ip=127.0.0.11
	2025-09-17T00:10:53.296043389Z  + local docker_host_ip
	2025-09-17T00:10:53.297219662Z  ++ cut '-d ' -f1
	2025-09-17T00:10:53.297239378Z  ++ head -n1 /dev/fd/63
	2025-09-17T00:10:53.297302330Z  +++ timeout 5 getent ahostsv4 host.docker.internal
	2025-09-17T00:10:53.333933474Z  + docker_host_ip=
	2025-09-17T00:10:53.333957210Z  + [[ -z '' ]]
	2025-09-17T00:10:53.334723206Z  ++ ip -4 route show default
	2025-09-17T00:10:53.334861120Z  ++ cut '-d ' -f3
	2025-09-17T00:10:53.337226683Z  + docker_host_ip=192.168.49.1
	2025-09-17T00:10:53.338009057Z  + iptables-save
	2025-09-17T00:10:53.338079470Z  + iptables-restore
	2025-09-17T00:10:53.340790398Z  + sed -e 's/-d 127.0.0.11/-d 192.168.49.1/g' -e 's/-A OUTPUT \(.*\) -j DOCKER_OUTPUT/\0\n-A PREROUTING \1 -j DOCKER_OUTPUT/' -e 's/--to-source :53/--to-source 192.168.49.1:53/g' -e 's/p -j DNAT --to-destination 127.0.0.11/p --dport 53 -j DNAT --to-destination 127.0.0.11/g'
	2025-09-17T00:10:53.352819115Z  + cp /etc/resolv.conf /etc/resolv.conf.original
	2025-09-17T00:10:53.354508340Z  ++ sed -e s/127.0.0.11/192.168.49.1/g /etc/resolv.conf.original
	2025-09-17T00:10:53.355592330Z  + replaced='# Generated by Docker Engine.
	2025-09-17T00:10:53.355604071Z  # This file can be edited; Docker Engine will not make further changes once it
	2025-09-17T00:10:53.355608093Z  # has been modified.
	2025-09-17T00:10:53.355611078Z  
	2025-09-17T00:10:53.355613705Z  nameserver 192.168.49.1
	2025-09-17T00:10:53.355616683Z  search local europe-west2-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal
	2025-09-17T00:10:53.355619344Z  options edns0 trust-ad ndots:0
	2025-09-17T00:10:53.355629577Z  
	2025-09-17T00:10:53.355632694Z  # Based on host file: '\''/etc/resolv.conf'\'' (internal resolver)
	2025-09-17T00:10:53.355635373Z  # ExtServers: [host(127.0.0.53)]
	2025-09-17T00:10:53.355638211Z  # Overrides: []
	2025-09-17T00:10:53.355640567Z  # Option ndots from: internal'
	2025-09-17T00:10:53.355643321Z  + [[ '' == '' ]]
	2025-09-17T00:10:53.355645855Z  + echo '# Generated by Docker Engine.
	2025-09-17T00:10:53.355648262Z  # This file can be edited; Docker Engine will not make further changes once it
	2025-09-17T00:10:53.355650966Z  # has been modified.
	2025-09-17T00:10:53.355653170Z  
	2025-09-17T00:10:53.355655313Z  nameserver 192.168.49.1
	2025-09-17T00:10:53.355657711Z  search local europe-west2-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal
	2025-09-17T00:10:53.355660411Z  options edns0 trust-ad ndots:0
	2025-09-17T00:10:53.355662905Z  
	2025-09-17T00:10:53.355665376Z  # Based on host file: '\''/etc/resolv.conf'\'' (internal resolver)
	2025-09-17T00:10:53.355668258Z  # ExtServers: [host(127.0.0.53)]
	2025-09-17T00:10:53.355671271Z  # Overrides: []
	2025-09-17T00:10:53.355675277Z  # Option ndots from: internal'
	2025-09-17T00:10:53.355827628Z  + files_to_update=('/etc/kubernetes/manifests/etcd.yaml' '/etc/kubernetes/manifests/kube-apiserver.yaml' '/etc/kubernetes/manifests/kube-controller-manager.yaml' '/etc/kubernetes/manifests/kube-scheduler.yaml' '/etc/kubernetes/controller-manager.conf' '/etc/kubernetes/scheduler.conf' '/kind/kubeadm.conf' '/var/lib/kubelet/kubeadm-flags.env')
	2025-09-17T00:10:53.355835783Z  + local files_to_update
	2025-09-17T00:10:53.355838759Z  + local should_fix_certificate=false
	2025-09-17T00:10:53.356985276Z  ++ cut '-d ' -f1
	2025-09-17T00:10:53.356999074Z  ++ head -n1 /dev/fd/63
	2025-09-17T00:10:53.357481616Z  ++++ hostname
	2025-09-17T00:10:53.358396478Z  +++ timeout 5 getent ahostsv4 ha-472903-m04
	2025-09-17T00:10:53.360915759Z  + curr_ipv4=192.168.49.5
	2025-09-17T00:10:53.360927630Z  + echo 'INFO: Detected IPv4 address: 192.168.49.5'
	2025-09-17T00:10:53.360930889Z  INFO: Detected IPv4 address: 192.168.49.5
	2025-09-17T00:10:53.360934055Z  + '[' -f /kind/old-ipv4 ']'
	2025-09-17T00:10:53.360947810Z  + [[ -n 192.168.49.5 ]]
	2025-09-17T00:10:53.360950900Z  + echo -n 192.168.49.5
	2025-09-17T00:10:53.362069858Z  ++ cut '-d ' -f1
	2025-09-17T00:10:53.362226287Z  ++ head -n1 /dev/fd/63
	2025-09-17T00:10:53.362683449Z  ++++ hostname
	2025-09-17T00:10:53.363465752Z  +++ timeout 5 getent ahostsv6 ha-472903-m04
	2025-09-17T00:10:53.365854458Z  + curr_ipv6=
	2025-09-17T00:10:53.365866442Z  + echo 'INFO: Detected IPv6 address: '
	2025-09-17T00:10:53.365875781Z  INFO: Detected IPv6 address: 
	2025-09-17T00:10:53.365877639Z  + '[' -f /kind/old-ipv6 ']'
	2025-09-17T00:10:53.365879245Z  + [[ -n '' ]]
	2025-09-17T00:10:53.365881212Z  + false
	2025-09-17T00:10:53.366327444Z  ++ uname -a
	2025-09-17T00:10:53.367068496Z  + echo 'entrypoint completed: Linux ha-472903-m04 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux'
	2025-09-17T00:10:53.367078447Z  entrypoint completed: Linux ha-472903-m04 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	2025-09-17T00:10:53.367080635Z  + exec /sbin/init
	2025-09-17T00:10:53.372992729Z  systemd 249.11-0ubuntu3.16 running in system mode (+PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
	2025-09-17T00:10:53.373011112Z  Detected virtualization docker.
	2025-09-17T00:10:53.373013587Z  Detected architecture x86-64.
	2025-09-17T00:10:53.373149632Z  
	2025-09-17T00:10:53.373158515Z  Welcome to Ubuntu 22.04.5 LTS!
	2025-09-17T00:10:53.373162003Z  
	2025-09-17T00:10:53.373572587Z  Failed to create control group inotify object: Too many open files
	2025-09-17T00:10:53.373584045Z  Failed to allocate manager object: Too many open files
	2025-09-17T00:10:53.373588387Z  [!!!!!!] Failed to allocate manager object.
	2025-09-17T00:10:53.373592232Z  Exiting PID 1...
	
	-- /stdout --
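	[Note] The container log above ends with systemd inside the new node failing to start: "Failed to create control group inotify object: Too many open files". That message typically indicates the host-wide inotify budget (fs.inotify.max_user_instances, sometimes max_user_watches) is exhausted by the node containers already running, since all of them share the host kernel's limits. A quick check of those sysctls, assuming a Linux host:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// readSysctl reads a sysctl value from /proc/sys, e.g.
	// fs.inotify.max_user_instances -> /proc/sys/fs/inotify/max_user_instances.
	func readSysctl(name string) (string, error) {
		b, err := os.ReadFile("/proc/sys/" + strings.ReplaceAll(name, ".", "/"))
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(b)), nil
	}

	func main() {
		for _, k := range []string{
			"fs.inotify.max_user_instances",
			"fs.inotify.max_user_watches",
		} {
			v, err := readSysctl(k)
			if err != nil {
				fmt.Println(k, "error:", err)
				continue
			}
			fmt.Println(k, "=", v)
		}
	}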
	I0917 00:10:53.762393  821000 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:10:53.816486  821000 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-09-17 00:10:53.807457503 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:10:53.816574  821000 errors.go:98] postmortem docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-09-17 00:10:53.807457503 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux A
rchitecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:fals
e Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:10:53.816654  821000 network_create.go:284] running [docker network inspect ha-472903-m04] to gather additional debugging logs...
	I0917 00:10:53.816680  821000 cli_runner.go:164] Run: docker network inspect ha-472903-m04
	W0917 00:10:53.833892  821000 cli_runner.go:211] docker network inspect ha-472903-m04 returned with exit code 1
	I0917 00:10:53.833939  821000 network_create.go:287] error running [docker network inspect ha-472903-m04]: docker network inspect ha-472903-m04: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-472903-m04 not found
	I0917 00:10:53.833960  821000 network_create.go:289] output of [docker network inspect ha-472903-m04]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-472903-m04 not found
	
	** /stderr **
	I0917 00:10:53.834032  821000 client.go:171] duration metric: took 5.539713276s to LocalClient.Create
	I0917 00:10:55.834620  821000 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:10:55.834696  821000 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:10:55.851424  821000 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	I0917 00:10:55.851554  821000 retry.go:31] will retry after 294.523683ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:10:56.147089  821000 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:10:56.164741  821000 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	I0917 00:10:56.164862  821000 retry.go:31] will retry after 257.986552ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:10:56.423366  821000 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:10:56.441481  821000 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	I0917 00:10:56.441581  821000 retry.go:31] will retry after 317.754745ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:10:56.760170  821000 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:10:56.777286  821000 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	W0917 00:10:56.777438  821000 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:10:56.777466  821000 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:10:56.777528  821000 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:10:56.777572  821000 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:10:56.794542  821000 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	I0917 00:10:56.794665  821000 retry.go:31] will retry after 153.455344ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:10:56.949035  821000 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:10:56.966089  821000 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	I0917 00:10:56.966211  821000 retry.go:31] will retry after 533.511946ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:10:57.500663  821000 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:10:57.519398  821000 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	I0917 00:10:57.519535  821000 retry.go:31] will retry after 737.729214ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:10:58.258093  821000 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:10:58.276193  821000 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	W0917 00:10:58.276336  821000 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:10:58.276363  821000 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
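	[Note] The retries above are minikube trying to discover which host port Docker mapped to the node's 22/tcp so it can SSH in and run df. The mapping is an ephemeral 127.0.0.1 port that only exists while the container is running, and the earlier inspect output shows "Ports": {} for the exited container, so every attempt fails. A sketch of that lookup, using the same inspect template as the log:

	package sketch

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// sshHostPort returns the host port Docker published for the container's
	// 22/tcp, mirroring the template in the log. It fails for stopped
	// containers because the ephemeral mapping is gone.
	func sshHostPort(container string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect",
			"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			container).Output()
		if err != nil {
			return "", fmt.Errorf("inspect %s: %w", container, err)
		}
		return strings.TrimSpace(string(out)), nil
	}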
	I0917 00:10:58.276382  821000 start.go:128] duration metric: took 9.983849407s to createHost
	I0917 00:10:58.276395  821000 start.go:83] releasing machines lock for "ha-472903-m04", held for 9.984000542s
	W0917 00:10:58.276426  821000 start.go:714] error starting host: creating host: create: creating: prepare kic ssh: container name "ha-472903-m04" state Stopped: log: 2025-09-17T00:10:53.373572587Z  Failed to create control group inotify object: Too many open files
	2025-09-17T00:10:53.373584045Z  Failed to allocate manager object: Too many open files
	2025-09-17T00:10:53.373588387Z  [!!!!!!] Failed to allocate manager object.
	2025-09-17T00:10:53.373592232Z  Exiting PID 1...: container exited unexpectedly
	I0917 00:10:58.276829  821000 cli_runner.go:164] Run: docker container inspect ha-472903-m04 --format={{.State.Status}}
	I0917 00:10:58.292653  821000 stop.go:39] StopHost: ha-472903-m04
	W0917 00:10:58.292937  821000 register.go:133] "Stopping" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
	I0917 00:10:58.294365  821000 out.go:179] * Stopping node "ha-472903-m04"  ...
	I0917 00:10:58.295455  821000 cli_runner.go:164] Run: docker container inspect ha-472903-m04 --format={{.State.Status}}
	I0917 00:10:58.312123  821000 stop.go:87] host is in state Stopped
	I0917 00:10:58.312161  821000 main.go:141] libmachine: Stopping "ha-472903-m04"...
	I0917 00:10:58.312232  821000 cli_runner.go:164] Run: docker container inspect ha-472903-m04 --format={{.State.Status}}
	I0917 00:10:58.331314  821000 stop.go:66] stop err: Machine "ha-472903-m04" is already stopped.
	I0917 00:10:58.331361  821000 stop.go:69] host is already stopped
	W0917 00:10:59.331533  821000 register.go:133] "Stopping" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
	I0917 00:10:59.333189  821000 out.go:179] * Deleting "ha-472903-m04" in docker ...
	I0917 00:10:59.334259  821000 cli_runner.go:164] Run: docker container inspect -f {{.Id}} ha-472903-m04
	I0917 00:10:59.351486  821000 cli_runner.go:164] Run: docker container inspect ha-472903-m04 --format={{.State.Status}}
	I0917 00:10:59.369086  821000 cli_runner.go:164] Run: docker exec --privileged -t ha-472903-m04 /bin/bash -c "sudo init 0"
	W0917 00:10:59.386752  821000 cli_runner.go:211] docker exec --privileged -t ha-472903-m04 /bin/bash -c "sudo init 0" returned with exit code 1
	I0917 00:10:59.386783  821000 oci.go:659] error shutdown ha-472903-m04: docker exec --privileged -t ha-472903-m04 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: container c1cb7be46c63273125821905eca7927f91d4191029f350af0778b4a946ccc8b3 is not running
	I0917 00:11:00.386931  821000 cli_runner.go:164] Run: docker container inspect ha-472903-m04 --format={{.State.Status}}
	I0917 00:11:00.405032  821000 oci.go:667] container ha-472903-m04 status is Stopped
	I0917 00:11:00.405063  821000 oci.go:679] Successfully shutdown container ha-472903-m04
	I0917 00:11:00.405131  821000 cli_runner.go:164] Run: docker rm -f -v ha-472903-m04
	I0917 00:11:00.427494  821000 cli_runner.go:164] Run: docker container inspect -f {{.Id}} ha-472903-m04
	W0917 00:11:00.443400  821000 cli_runner.go:211] docker container inspect -f {{.Id}} ha-472903-m04 returned with exit code 1
	I0917 00:11:00.443512  821000 cli_runner.go:164] Run: docker network inspect ha-472903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:11:00.458739  821000 cli_runner.go:164] Run: docker network rm ha-472903
	W0917 00:11:00.475093  821000 cli_runner.go:211] docker network rm ha-472903 returned with exit code 1
	W0917 00:11:00.475214  821000 kic.go:390] failed to remove network (which might be okay) ha-472903: unable to delete a network that is attached to a running container
	W0917 00:11:00.475429  821000 out.go:285] ! StartHost failed, but will try again: creating host: create: creating: prepare kic ssh: container name "ha-472903-m04" state Stopped: log: 2025-09-17T00:10:53.373572587Z  Failed to create control group inotify object: Too many open files
	2025-09-17T00:10:53.373584045Z  Failed to allocate manager object: Too many open files
	2025-09-17T00:10:53.373588387Z  [!!!!!!] Failed to allocate manager object.
	2025-09-17T00:10:53.373592232Z  Exiting PID 1...: container exited unexpectedly
	! StartHost failed, but will try again: creating host: create: creating: prepare kic ssh: container name "ha-472903-m04" state Stopped: log: 2025-09-17T00:10:53.373572587Z  Failed to create control group inotify object: Too many open files
	2025-09-17T00:10:53.373584045Z  Failed to allocate manager object: Too many open files
	2025-09-17T00:10:53.373588387Z  [!!!!!!] Failed to allocate manager object.
	2025-09-17T00:10:53.373592232Z  Exiting PID 1...: container exited unexpectedly
	I0917 00:11:00.475455  821000 start.go:729] Will try again in 5 seconds ...
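	[Note] At this point the failed container has been removed (the shared cluster network is intentionally left in place, since other nodes still use it) and the whole createHost flow is retried after a fixed delay. A hedged sketch of that outer retry shape; the helper name and signature are illustrative, not minikube's:

	package sketch

	import (
		"fmt"
		"time"
	)

	// startHostWithRetry runs the host-creation function, and on failure waits
	// a fixed delay before trying again, as the "Will try again in 5 seconds"
	// line above does.
	func startHostWithRetry(create func() error, attempts int, delay time.Duration) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = create(); err == nil {
				return nil
			}
			fmt.Printf("! StartHost failed, but will try again: %v\n", err)
			time.Sleep(delay)
		}
		return err
	}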
	I0917 00:11:05.476553  821000 start.go:360] acquireMachinesLock for ha-472903-m04: {Name:mkdbbd0d5b3cd7ad4b13d37f2d562d6d6421c5de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:11:05.476699  821000 start.go:364] duration metric: took 61.607µs to acquireMachinesLock for "ha-472903-m04"
	I0917 00:11:05.476735  821000 start.go:93] Provisioning new machine with config: &{Name:ha-472903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingre
ss:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disabl
eOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime: ControlPlane:false Worker:true}
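	[Note] The provisioning config dumped above carries the full cluster spec plus the node being created: m04 is worker-only (ControlPlane:false), its IP and runtime still unset, alongside the three existing control-plane nodes at 192.168.49.2-4. A heavily trimmed, illustrative view of the fields relevant to this step; field names follow the log, and the real minikube structs carry many more fields:

	package sketch

	// Node is a trimmed view of one cluster node entry from the dump above.
	type Node struct {
		Name              string
		IP                string
		Port              int
		KubernetesVersion string
		ControlPlane      bool
		Worker            bool
	}

	// ClusterConfig is a trimmed view of the cluster-level settings in the dump.
	type ClusterConfig struct {
		Name          string
		KicBaseImage  string
		Memory        int // MB
		CPUs          int
		Driver        string
		APIServerPort int
		Nodes         []Node
	}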
	I0917 00:11:05.476860  821000 start.go:125] createHost starting for "m04" (driver="docker")
	I0917 00:11:05.478497  821000 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0917 00:11:05.478629  821000 start.go:159] libmachine.API.Create for "ha-472903" (driver="docker")
	I0917 00:11:05.478662  821000 client.go:168] LocalClient.Create starting
	I0917 00:11:05.478734  821000 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem
	I0917 00:11:05.478773  821000 main.go:141] libmachine: Decoding PEM data...
	I0917 00:11:05.478788  821000 main.go:141] libmachine: Parsing certificate...
	I0917 00:11:05.478858  821000 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem
	I0917 00:11:05.478882  821000 main.go:141] libmachine: Decoding PEM data...
	I0917 00:11:05.478891  821000 main.go:141] libmachine: Parsing certificate...
	I0917 00:11:05.479118  821000 cli_runner.go:164] Run: docker network inspect ha-472903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:11:05.495669  821000 network_create.go:77] Found existing network {name:ha-472903 subnet:0xc0018d4300 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0917 00:11:05.495699  821000 kic.go:121] calculated static IP "192.168.49.5" for the "ha-472903-m04" container
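	[Note] The "calculated static IP" line above follows the addressing visible throughout this log: the cluster network is 192.168.49.0/24 with .1 as the gateway, and node N gets host address N+1, so the fourth node lands on 192.168.49.5. A hedged sketch of that arithmetic (the observed values match, but the real calculation lives in minikube's kic code):

	package main

	import (
		"fmt"
		"net"
	)

	// nodeIP returns the static address for the given 1-based node index within
	// the cluster subnet: the gateway takes .1, the first node .2, and so on.
	func nodeIP(subnet *net.IPNet, nodeIndex int) net.IP {
		ip := subnet.IP.To4()
		out := make(net.IP, len(ip))
		copy(out, ip)
		out[3] = ip[3] + byte(nodeIndex+1)
		return out
	}

	func main() {
		_, subnet, _ := net.ParseCIDR("192.168.49.0/24")
		fmt.Println(nodeIP(subnet, 4)) // fourth node -> 192.168.49.5
	}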
	I0917 00:11:05.495767  821000 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0917 00:11:05.511383  821000 cli_runner.go:164] Run: docker volume create ha-472903-m04 --label name.minikube.sigs.k8s.io=ha-472903-m04 --label created_by.minikube.sigs.k8s.io=true
	I0917 00:11:05.526310  821000 oci.go:103] Successfully created a docker volume ha-472903-m04
	I0917 00:11:05.526373  821000 cli_runner.go:164] Run: docker run --rm --name ha-472903-m04-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-472903-m04 --entrypoint /usr/bin/test -v ha-472903-m04:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0917 00:11:05.760063  821000 oci.go:107] Successfully prepared a docker volume ha-472903-m04
	I0917 00:11:05.760108  821000 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0917 00:11:05.760133  821000 kic.go:194] Starting extracting preloaded images to volume ...
	I0917 00:11:05.760220  821000 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-472903-m04:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0917 00:11:10.063886  821000 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-472903-m04:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.303621546s)
	I0917 00:11:10.063917  821000 kic.go:203] duration metric: took 4.303782151s to extract preloaded images to volume ...
	W0917 00:11:10.064008  821000 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0917 00:11:10.064037  821000 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0917 00:11:10.064072  821000 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0917 00:11:10.114726  821000 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-472903-m04 --name ha-472903-m04 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-472903-m04 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-472903-m04 --network ha-472903 --ip 192.168.49.5 --volume ha-472903-m04:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0917 00:11:10.365262  821000 cli_runner.go:164] Run: docker container inspect ha-472903-m04 --format={{.State.Running}}
	I0917 00:11:10.384042  821000 cli_runner.go:164] Run: docker container inspect ha-472903-m04 --format={{.State.Status}}
	I0917 00:11:10.401298  821000 cli_runner.go:164] Run: docker exec ha-472903-m04 stat /var/lib/dpkg/alternatives/iptables
	I0917 00:11:10.450243  821000 oci.go:144] the created container "ha-472903-m04" has a running status.
	I0917 00:11:10.450292  821000 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m04/id_rsa...
	I0917 00:11:10.774473  821000 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m04/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0917 00:11:10.774523  821000 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m04/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0917 00:11:10.803963  821000 cli_runner.go:164] Run: docker container inspect ha-472903-m04 --format={{.State.Status}}
	I0917 00:11:10.820453  821000 cli_runner.go:164] Run: docker inspect ha-472903-m04
	I0917 00:11:10.839061  821000 errors.go:84] Postmortem inspect ("docker inspect ha-472903-m04"): -- stdout --
	[
	    {
	        "Id": "8fd0774410ecafc44ddb5fd6937ccf1c84d289610e3cf20709aa5030414262b4",
	        "Created": "2025-09-17T00:11:10.13023227Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "exited",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 255,
	            "Error": "",
	            "StartedAt": "2025-09-17T00:11:10.161057301Z",
	            "FinishedAt": "2025-09-17T00:11:10.488644564Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/8fd0774410ecafc44ddb5fd6937ccf1c84d289610e3cf20709aa5030414262b4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8fd0774410ecafc44ddb5fd6937ccf1c84d289610e3cf20709aa5030414262b4/hostname",
	        "HostsPath": "/var/lib/docker/containers/8fd0774410ecafc44ddb5fd6937ccf1c84d289610e3cf20709aa5030414262b4/hosts",
	        "LogPath": "/var/lib/docker/containers/8fd0774410ecafc44ddb5fd6937ccf1c84d289610e3cf20709aa5030414262b4/8fd0774410ecafc44ddb5fd6937ccf1c84d289610e3cf20709aa5030414262b4-json.log",
	        "Name": "/ha-472903-m04",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-472903-m04:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-472903",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8fd0774410ecafc44ddb5fd6937ccf1c84d289610e3cf20709aa5030414262b4",
	                "LowerDir": "/var/lib/docker/overlay2/b93af5609d3d22c3bd692b969af927056061a36b4d638ab9ceb57e5d84e15d47-init/diff:/var/lib/docker/overlay2/949a3fbecd0c2c005aa419b7ddc214e7bf4333225d7b227e8b0d0ea188b945ec/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b93af5609d3d22c3bd692b969af927056061a36b4d638ab9ceb57e5d84e15d47/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b93af5609d3d22c3bd692b969af927056061a36b4d638ab9ceb57e5d84e15d47/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b93af5609d3d22c3bd692b969af927056061a36b4d638ab9ceb57e5d84e15d47/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-472903-m04",
	                "Source": "/var/lib/docker/volumes/ha-472903-m04/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-472903-m04",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-472903-m04",
	                "name.minikube.sigs.k8s.io": "ha-472903-m04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "",
	            "SandboxKey": "",
	            "Ports": {},
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-472903": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.5"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "22d49b2f397dfabc2a3967bd54b05204a52976e683f65ff07bff00e793040bef",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-472903-m04",
	                        "8fd0774410ec"
	                    ]
	                }
	            }
	        }
	    }
	]
	
	-- /stdout --
	I0917 00:11:10.839134  821000 cli_runner.go:164] Run: docker logs --timestamps --details ha-472903-m04
	I0917 00:11:10.857245  821000 errors.go:91] Postmortem logs ("docker logs --timestamps --details ha-472903-m04"): -- stdout --
	2025-09-17T00:11:10.359806059Z  + userns=
	2025-09-17T00:11:10.359840280Z  + grep -Eqv '0[[:space:]]+0[[:space:]]+4294967295' /proc/self/uid_map
	2025-09-17T00:11:10.363054075Z  + validate_userns
	2025-09-17T00:11:10.363070571Z  + [[ -z '' ]]
	2025-09-17T00:11:10.363073873Z  + return
	2025-09-17T00:11:10.363076250Z  + configure_containerd
	2025-09-17T00:11:10.363112403Z  + local snapshotter=
	2025-09-17T00:11:10.363117220Z  + [[ -n '' ]]
	2025-09-17T00:11:10.363120193Z  + [[ -z '' ]]
	2025-09-17T00:11:10.363693323Z  ++ stat -f -c %T /kind
	2025-09-17T00:11:10.364903493Z  + container_filesystem=overlayfs
	2025-09-17T00:11:10.364917340Z  + [[ overlayfs == \z\f\s ]]
	2025-09-17T00:11:10.364920964Z  + [[ -n '' ]]
	2025-09-17T00:11:10.364923767Z  + configure_proxy
	2025-09-17T00:11:10.364926459Z  + mkdir -p /etc/systemd/system.conf.d/
	2025-09-17T00:11:10.367862885Z  + [[ ! -z '' ]]
	2025-09-17T00:11:10.367876968Z  + cat
	2025-09-17T00:11:10.369095910Z  + fix_mount
	2025-09-17T00:11:10.369109530Z  + echo 'INFO: ensuring we can execute mount/umount even with userns-remap'
	2025-09-17T00:11:10.369113094Z  INFO: ensuring we can execute mount/umount even with userns-remap
	2025-09-17T00:11:10.369516662Z  ++ which mount
	2025-09-17T00:11:10.370840253Z  ++ which umount
	2025-09-17T00:11:10.371717900Z  + chown root:root /usr/bin/mount /usr/bin/umount
	2025-09-17T00:11:10.377011136Z  ++ which mount
	2025-09-17T00:11:10.378291209Z  ++ which umount
	2025-09-17T00:11:10.379305898Z  + chmod -s /usr/bin/mount /usr/bin/umount
	2025-09-17T00:11:10.380872470Z  +++ which mount
	2025-09-17T00:11:10.381893918Z  ++ stat -f -c %T /usr/bin/mount
	2025-09-17T00:11:10.382875778Z  + [[ overlayfs == \a\u\f\s ]]
	2025-09-17T00:11:10.382890185Z  + echo 'INFO: remounting /sys read-only'
	2025-09-17T00:11:10.382893701Z  INFO: remounting /sys read-only
	2025-09-17T00:11:10.382896502Z  + mount -o remount,ro /sys
	2025-09-17T00:11:10.384590996Z  + echo 'INFO: making mounts shared'
	2025-09-17T00:11:10.384605800Z  INFO: making mounts shared
	2025-09-17T00:11:10.384609457Z  + mount --make-rshared /
	2025-09-17T00:11:10.385797800Z  + retryable_fix_cgroup
	2025-09-17T00:11:10.386161640Z  ++ seq 0 10
	2025-09-17T00:11:10.387006947Z  + for i in $(seq 0 10)
	2025-09-17T00:11:10.387022041Z  + fix_cgroup
	2025-09-17T00:11:10.387122837Z  + [[ -f /sys/fs/cgroup/cgroup.controllers ]]
	2025-09-17T00:11:10.387132902Z  + echo 'INFO: detected cgroup v2'
	2025-09-17T00:11:10.387135964Z  INFO: detected cgroup v2
	2025-09-17T00:11:10.387150230Z  + return
	2025-09-17T00:11:10.387153333Z  + return
	2025-09-17T00:11:10.387156018Z  + fix_machine_id
	2025-09-17T00:11:10.387161237Z  + echo 'INFO: clearing and regenerating /etc/machine-id'
	2025-09-17T00:11:10.387164137Z  INFO: clearing and regenerating /etc/machine-id
	2025-09-17T00:11:10.387166935Z  + rm -f /etc/machine-id
	2025-09-17T00:11:10.388188019Z  + systemd-machine-id-setup
	2025-09-17T00:11:10.391347934Z  Initializing machine ID from random generator.
	2025-09-17T00:11:10.393060131Z  + fix_product_name
	2025-09-17T00:11:10.393074443Z  + [[ -f /sys/class/dmi/id/product_name ]]
	2025-09-17T00:11:10.393076755Z  + echo 'INFO: faking /sys/class/dmi/id/product_name to be "kind"'
	2025-09-17T00:11:10.393078835Z  INFO: faking /sys/class/dmi/id/product_name to be "kind"
	2025-09-17T00:11:10.393080604Z  + echo kind
	2025-09-17T00:11:10.393968339Z  + mount -o ro,bind /kind/product_name /sys/class/dmi/id/product_name
	2025-09-17T00:11:10.395339442Z  + fix_product_uuid
	2025-09-17T00:11:10.395352008Z  + [[ ! -f /kind/product_uuid ]]
	2025-09-17T00:11:10.395355291Z  + cat /proc/sys/kernel/random/uuid
	2025-09-17T00:11:10.396603728Z  + [[ -f /sys/class/dmi/id/product_uuid ]]
	2025-09-17T00:11:10.396617418Z  + echo 'INFO: faking /sys/class/dmi/id/product_uuid to be random'
	2025-09-17T00:11:10.396620813Z  INFO: faking /sys/class/dmi/id/product_uuid to be random
	2025-09-17T00:11:10.396623706Z  + mount -o ro,bind /kind/product_uuid /sys/class/dmi/id/product_uuid
	2025-09-17T00:11:10.398218266Z  + [[ -f /sys/devices/virtual/dmi/id/product_uuid ]]
	2025-09-17T00:11:10.398230988Z  + echo 'INFO: faking /sys/devices/virtual/dmi/id/product_uuid as well'
	2025-09-17T00:11:10.398233935Z  INFO: faking /sys/devices/virtual/dmi/id/product_uuid as well
	2025-09-17T00:11:10.398237546Z  + mount -o ro,bind /kind/product_uuid /sys/devices/virtual/dmi/id/product_uuid
	2025-09-17T00:11:10.399867951Z  + select_iptables
	2025-09-17T00:11:10.399880960Z  + local mode num_legacy_lines num_nft_lines
	2025-09-17T00:11:10.400716814Z  ++ grep -c '^-'
	2025-09-17T00:11:10.403388522Z  ++ true
	2025-09-17T00:11:10.403658282Z  + num_legacy_lines=0
	2025-09-17T00:11:10.404547799Z  ++ grep -c '^-'
	2025-09-17T00:11:10.409965864Z  + num_nft_lines=6
	2025-09-17T00:11:10.409980463Z  + '[' 0 -ge 6 ']'
	2025-09-17T00:11:10.409983834Z  + mode=nft
	2025-09-17T00:11:10.409986948Z  + echo 'INFO: setting iptables to detected mode: nft'
	2025-09-17T00:11:10.409997143Z  INFO: setting iptables to detected mode: nft
	2025-09-17T00:11:10.410000120Z  + update-alternatives --set iptables /usr/sbin/iptables-nft
	2025-09-17T00:11:10.410051086Z  + echo 'retryable update-alternatives: --set iptables /usr/sbin/iptables-nft'
	2025-09-17T00:11:10.410062423Z  + local 'args=--set iptables /usr/sbin/iptables-nft'
	2025-09-17T00:11:10.410520083Z  ++ seq 0 15
	2025-09-17T00:11:10.411324412Z  + for i in $(seq 0 15)
	2025-09-17T00:11:10.411332726Z  + /usr/bin/update-alternatives --set iptables /usr/sbin/iptables-nft
	2025-09-17T00:11:10.412504333Z  + return
	2025-09-17T00:11:10.412518812Z  + update-alternatives --set ip6tables /usr/sbin/ip6tables-nft
	2025-09-17T00:11:10.412528516Z  + echo 'retryable update-alternatives: --set ip6tables /usr/sbin/ip6tables-nft'
	2025-09-17T00:11:10.412531591Z  + local 'args=--set ip6tables /usr/sbin/ip6tables-nft'
	2025-09-17T00:11:10.412997162Z  ++ seq 0 15
	2025-09-17T00:11:10.413867960Z  + for i in $(seq 0 15)
	2025-09-17T00:11:10.413882124Z  + /usr/bin/update-alternatives --set ip6tables /usr/sbin/ip6tables-nft
	2025-09-17T00:11:10.414887110Z  + return
	2025-09-17T00:11:10.414900175Z  + enable_network_magic
	2025-09-17T00:11:10.414971605Z  + local docker_embedded_dns_ip=127.0.0.11
	2025-09-17T00:11:10.414978077Z  + local docker_host_ip
	2025-09-17T00:11:10.416386989Z  ++ cut '-d ' -f1
	2025-09-17T00:11:10.416396510Z  ++ head -n1 /dev/fd/63
	2025-09-17T00:11:10.416398629Z  +++ timeout 5 getent ahostsv4 host.docker.internal
	2025-09-17T00:11:10.450883813Z  + docker_host_ip=
	2025-09-17T00:11:10.450901494Z  + [[ -z '' ]]
	2025-09-17T00:11:10.451638963Z  ++ ip -4 route show default
	2025-09-17T00:11:10.451807937Z  ++ cut '-d ' -f3
	2025-09-17T00:11:10.453837337Z  + docker_host_ip=192.168.49.1
	2025-09-17T00:11:10.454131717Z  + iptables-save
	2025-09-17T00:11:10.454559847Z  + iptables-restore
	2025-09-17T00:11:10.456712698Z  + sed -e 's/-d 127.0.0.11/-d 192.168.49.1/g' -e 's/-A OUTPUT \(.*\) -j DOCKER_OUTPUT/\0\n-A PREROUTING \1 -j DOCKER_OUTPUT/' -e 's/--to-source :53/--to-source 192.168.49.1:53/g' -e 's/p -j DNAT --to-destination 127.0.0.11/p --dport 53 -j DNAT --to-destination 127.0.0.11/g'
	2025-09-17T00:11:10.465841895Z  + cp /etc/resolv.conf /etc/resolv.conf.original
	2025-09-17T00:11:10.467523286Z  ++ sed -e s/127.0.0.11/192.168.49.1/g /etc/resolv.conf.original
	2025-09-17T00:11:10.468628624Z  + replaced='# Generated by Docker Engine.
	2025-09-17T00:11:10.468640228Z  # This file can be edited; Docker Engine will not make further changes once it
	2025-09-17T00:11:10.468643626Z  # has been modified.
	2025-09-17T00:11:10.468646497Z  
	2025-09-17T00:11:10.468649192Z  nameserver 192.168.49.1
	2025-09-17T00:11:10.468651964Z  search local europe-west2-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal
	2025-09-17T00:11:10.468655105Z  options edns0 trust-ad ndots:0
	2025-09-17T00:11:10.468667349Z  
	2025-09-17T00:11:10.468670257Z  # Based on host file: '\''/etc/resolv.conf'\'' (internal resolver)
	2025-09-17T00:11:10.468673564Z  # ExtServers: [host(127.0.0.53)]
	2025-09-17T00:11:10.468675809Z  # Overrides: []
	2025-09-17T00:11:10.468678397Z  # Option ndots from: internal'
	2025-09-17T00:11:10.468680896Z  + [[ '' == '' ]]
	2025-09-17T00:11:10.468683412Z  + echo '# Generated by Docker Engine.
	2025-09-17T00:11:10.468686136Z  # This file can be edited; Docker Engine will not make further changes once it
	2025-09-17T00:11:10.468688877Z  # has been modified.
	2025-09-17T00:11:10.468691928Z  
	2025-09-17T00:11:10.468694614Z  nameserver 192.168.49.1
	2025-09-17T00:11:10.468697394Z  search local europe-west2-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal
	2025-09-17T00:11:10.468700302Z  options edns0 trust-ad ndots:0
	2025-09-17T00:11:10.468703089Z  
	2025-09-17T00:11:10.468705782Z  # Based on host file: '\''/etc/resolv.conf'\'' (internal resolver)
	2025-09-17T00:11:10.468708696Z  # ExtServers: [host(127.0.0.53)]
	2025-09-17T00:11:10.468711487Z  # Overrides: []
	2025-09-17T00:11:10.468714117Z  # Option ndots from: internal'
	2025-09-17T00:11:10.468861949Z  + files_to_update=('/etc/kubernetes/manifests/etcd.yaml' '/etc/kubernetes/manifests/kube-apiserver.yaml' '/etc/kubernetes/manifests/kube-controller-manager.yaml' '/etc/kubernetes/manifests/kube-scheduler.yaml' '/etc/kubernetes/controller-manager.conf' '/etc/kubernetes/scheduler.conf' '/kind/kubeadm.conf' '/var/lib/kubelet/kubeadm-flags.env')
	2025-09-17T00:11:10.468871952Z  + local files_to_update
	2025-09-17T00:11:10.468874340Z  + local should_fix_certificate=false
	2025-09-17T00:11:10.469990495Z  ++ cut '-d ' -f1
	2025-09-17T00:11:10.470104657Z  ++ head -n1 /dev/fd/63
	2025-09-17T00:11:10.470646549Z  ++++ hostname
	2025-09-17T00:11:10.471378086Z  +++ timeout 5 getent ahostsv4 ha-472903-m04
	2025-09-17T00:11:10.473892286Z  + curr_ipv4=192.168.49.5
	2025-09-17T00:11:10.473906861Z  + echo 'INFO: Detected IPv4 address: 192.168.49.5'
	2025-09-17T00:11:10.473910477Z  INFO: Detected IPv4 address: 192.168.49.5
	2025-09-17T00:11:10.473913174Z  + '[' -f /kind/old-ipv4 ']'
	2025-09-17T00:11:10.473931734Z  + [[ -n 192.168.49.5 ]]
	2025-09-17T00:11:10.473941070Z  + echo -n 192.168.49.5
	2025-09-17T00:11:10.475057993Z  ++ cut '-d ' -f1
	2025-09-17T00:11:10.475148053Z  ++ head -n1 /dev/fd/63
	2025-09-17T00:11:10.475684843Z  ++++ hostname
	2025-09-17T00:11:10.476377193Z  +++ timeout 5 getent ahostsv6 ha-472903-m04
	2025-09-17T00:11:10.478666648Z  + curr_ipv6=
	2025-09-17T00:11:10.478680018Z  + echo 'INFO: Detected IPv6 address: '
	2025-09-17T00:11:10.478691451Z  INFO: Detected IPv6 address: 
	2025-09-17T00:11:10.478694481Z  + '[' -f /kind/old-ipv6 ']'
	2025-09-17T00:11:10.478711344Z  + [[ -n '' ]]
	2025-09-17T00:11:10.478721760Z  + false
	2025-09-17T00:11:10.479227338Z  ++ uname -a
	2025-09-17T00:11:10.480003266Z  + echo 'entrypoint completed: Linux ha-472903-m04 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux'
	2025-09-17T00:11:10.480016022Z  entrypoint completed: Linux ha-472903-m04 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	2025-09-17T00:11:10.480018460Z  + exec /sbin/init
	2025-09-17T00:11:10.485821724Z  systemd 249.11-0ubuntu3.16 running in system mode (+PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
	2025-09-17T00:11:10.485835634Z  Detected virtualization docker.
	2025-09-17T00:11:10.485838717Z  Detected architecture x86-64.
	2025-09-17T00:11:10.485970815Z  
	2025-09-17T00:11:10.485977948Z  Welcome to Ubuntu 22.04.5 LTS!
	2025-09-17T00:11:10.485980211Z  
	2025-09-17T00:11:10.486351651Z  Failed to create control group inotify object: Too many open files
	2025-09-17T00:11:10.486361344Z  Failed to allocate manager object: Too many open files
	2025-09-17T00:11:10.486363593Z  [!!!!!!] Failed to allocate manager object.
	2025-09-17T00:11:10.486365478Z  Exiting PID 1...
	
	-- /stdout --
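The systemd failure captured above ("Failed to create control group inotify object: Too many open files") typically indicates that the host has run out of inotify instances: each systemd-based node container needs its own, and they all count against the host's per-user limit, so a busy CI host can exhaust the default fs.inotify.max_user_instances. A minimal host-side diagnostic sketch (the limit values below are illustrative assumptions, not taken from this run):

  # current limits and an approximate count of inotify instances in use
  sysctl fs.inotify.max_user_instances fs.inotify.max_user_watches
  find /proc/*/fd -lname 'anon_inode:inotify' 2>/dev/null | wc -l
  # raising the limits (illustrative values)
  sudo sysctl -w fs.inotify.max_user_instances=1024
  sudo sysctl -w fs.inotify.max_user_watches=1048576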
	I0917 00:11:10.857329  821000 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:11:10.908903  821000 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-09-17 00:11:10.89986207 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:11:10.908987  821000 errors.go:98] postmortem docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-09-17 00:11:10.89986207 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Ar
chitecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false
Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:11:10.909103  821000 network_create.go:284] running [docker network inspect ha-472903-m04] to gather additional debugging logs...
	I0917 00:11:10.909125  821000 cli_runner.go:164] Run: docker network inspect ha-472903-m04
	W0917 00:11:10.925100  821000 cli_runner.go:211] docker network inspect ha-472903-m04 returned with exit code 1
	I0917 00:11:10.925136  821000 network_create.go:287] error running [docker network inspect ha-472903-m04]: docker network inspect ha-472903-m04: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-472903-m04 not found
	I0917 00:11:10.925149  821000 network_create.go:289] output of [docker network inspect ha-472903-m04]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-472903-m04 not found
	
	** /stderr **
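The network probe above fails only because the per-node name is queried for debugging; the nodes actually join the cluster-wide Docker network named ha-472903 (NetworkID 22d49b2f..., visible in both container inspects). A hand-run equivalent against the real network name, assuming it still exists:

  docker network inspect ha-472903 --format '{{.Id}} {{range .Containers}}{{.Name}} {{end}}'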
	I0917 00:11:10.925205  821000 client.go:171] duration metric: took 5.446532465s to LocalClient.Create
	I0917 00:11:12.925588  821000 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:11:12.925694  821000 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:11:12.944637  821000 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	I0917 00:11:12.944749  821000 retry.go:31] will retry after 196.274886ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:11:13.142103  821000 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:11:13.159669  821000 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	I0917 00:11:13.159776  821000 retry.go:31] will retry after 471.266424ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:11:13.632096  821000 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:11:13.649950  821000 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	I0917 00:11:13.650070  821000 retry.go:31] will retry after 743.454884ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:11:14.393750  821000 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:11:14.411203  821000 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	W0917 00:11:14.411337  821000 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:11:14.411354  821000 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:11:14.411401  821000 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:11:14.411475  821000 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:11:14.427696  821000 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	I0917 00:11:14.427837  821000 retry.go:31] will retry after 130.635051ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:11:14.559217  821000 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:11:14.577115  821000 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	I0917 00:11:14.577248  821000 retry.go:31] will retry after 219.795574ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:11:14.797713  821000 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:11:14.814708  821000 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	I0917 00:11:14.814854  821000 retry.go:31] will retry after 364.40629ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:11:15.179388  821000 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:11:15.198054  821000 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	I0917 00:11:15.198165  821000 retry.go:31] will retry after 582.113186ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:11:15.780923  821000 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:11:15.798171  821000 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	W0917 00:11:15.798300  821000 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:11:15.798318  821000 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:11:15.798331  821000 start.go:128] duration metric: took 10.321462511s to createHost
	I0917 00:11:15.798343  821000 start.go:83] releasing machines lock for "ha-472903-m04", held for 10.321628588s
	W0917 00:11:15.798469  821000 out.go:285] * Failed to start docker container. Running "minikube delete -p ha-472903" may fix it: creating host: create: creating: prepare kic ssh: container name "ha-472903-m04" state Stopped: log: 2025-09-17T00:11:10.486351651Z  Failed to create control group inotify object: Too many open files
	2025-09-17T00:11:10.486361344Z  Failed to allocate manager object: Too many open files
	2025-09-17T00:11:10.486363593Z  [!!!!!!] Failed to allocate manager object.
	2025-09-17T00:11:10.486365478Z  Exiting PID 1...: container exited unexpectedly
	* Failed to start docker container. Running "minikube delete -p ha-472903" may fix it: creating host: create: creating: prepare kic ssh: container name "ha-472903-m04" state Stopped: log: 2025-09-17T00:11:10.486351651Z  Failed to create control group inotify object: Too many open files
	2025-09-17T00:11:10.486361344Z  Failed to allocate manager object: Too many open files
	2025-09-17T00:11:10.486363593Z  [!!!!!!] Failed to allocate manager object.
	2025-09-17T00:11:10.486365478Z  Exiting PID 1...: container exited unexpectedly
	I0917 00:11:15.800469  821000 out.go:203] 
	W0917 00:11:15.801528  821000 out.go:285] X Exiting due to GUEST_PROVISION_EXIT_UNEXPECTED: Failed to start host: creating host: create: creating: prepare kic ssh: container name "ha-472903-m04" state Stopped: log: 2025-09-17T00:11:10.486351651Z  Failed to create control group inotify object: Too many open files
	2025-09-17T00:11:10.486361344Z  Failed to allocate manager object: Too many open files
	2025-09-17T00:11:10.486363593Z  [!!!!!!] Failed to allocate manager object.
	2025-09-17T00:11:10.486365478Z  Exiting PID 1...: container exited unexpectedly
	X Exiting due to GUEST_PROVISION_EXIT_UNEXPECTED: Failed to start host: creating host: create: creating: prepare kic ssh: container name "ha-472903-m04" state Stopped: log: 2025-09-17T00:11:10.486351651Z  Failed to create control group inotify object: Too many open files
	2025-09-17T00:11:10.486361344Z  Failed to allocate manager object: Too many open files
	2025-09-17T00:11:10.486363593Z  [!!!!!!] Failed to allocate manager object.
	2025-09-17T00:11:10.486365478Z  Exiting PID 1...: container exited unexpectedly
	I0917 00:11:15.802495  821000 out.go:203] 

                                                
                                                
** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-linux-amd64 -p ha-472903 node add --alsologtostderr -v 5" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/AddWorkerNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/AddWorkerNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-472903
helpers_test.go:243: (dbg) docker inspect ha-472903:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "05f03528ecc5ba6a39041bcc2845d236679d61fa3752c15e7e068dac7d8c9047",
	        "Created": "2025-09-16T23:56:35.178831158Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 804802,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-16T23:56:35.209552026Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/05f03528ecc5ba6a39041bcc2845d236679d61fa3752c15e7e068dac7d8c9047/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/05f03528ecc5ba6a39041bcc2845d236679d61fa3752c15e7e068dac7d8c9047/hostname",
	        "HostsPath": "/var/lib/docker/containers/05f03528ecc5ba6a39041bcc2845d236679d61fa3752c15e7e068dac7d8c9047/hosts",
	        "LogPath": "/var/lib/docker/containers/05f03528ecc5ba6a39041bcc2845d236679d61fa3752c15e7e068dac7d8c9047/05f03528ecc5ba6a39041bcc2845d236679d61fa3752c15e7e068dac7d8c9047-json.log",
	        "Name": "/ha-472903",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-472903:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-472903",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "05f03528ecc5ba6a39041bcc2845d236679d61fa3752c15e7e068dac7d8c9047",
	                "LowerDir": "/var/lib/docker/overlay2/37229b42d46c992f89d690b880f5a9c43e154eecc2ad5aeee133e9eb30accccb-init/diff:/var/lib/docker/overlay2/949a3fbecd0c2c005aa419b7ddc214e7bf4333225d7b227e8b0d0ea188b945ec/diff",
	                "MergedDir": "/var/lib/docker/overlay2/37229b42d46c992f89d690b880f5a9c43e154eecc2ad5aeee133e9eb30accccb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/37229b42d46c992f89d690b880f5a9c43e154eecc2ad5aeee133e9eb30accccb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/37229b42d46c992f89d690b880f5a9c43e154eecc2ad5aeee133e9eb30accccb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-472903",
	                "Source": "/var/lib/docker/volumes/ha-472903/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-472903",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-472903",
	                "name.minikube.sigs.k8s.io": "ha-472903",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "abe382ce28757e80b5cdae91a64217d3672b21c23f3517480bd53105aeca147e",
	            "SandboxKey": "/var/run/docker/netns/abe382ce2875",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33544"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33545"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33548"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33546"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33547"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-472903": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7e:42:9f:f6:50:c2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "22d49b2f397dfabc2a3967bd54b05204a52976e683f65ff07bff00e793040bef",
	                    "EndpointID": "4d4d83129a167c8183e8ef58cc6057f613d8d69adf59710ba6c623d1ff2970c6",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-472903",
	                        "05f03528ecc5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
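For a hand-run check of the port mapping that the earlier retry loop was trying to read, the same Go template can be pointed at the healthy primary container, whose inspect output above shows 22/tcp published on 127.0.0.1:33544 (assuming the container is still running):

  docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-472903
  # expected output, per the mapping above: 33544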
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-472903 -n ha-472903
helpers_test.go:252: <<< TestMultiControlPlane/serial/AddWorkerNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/AddWorkerNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p ha-472903 logs -n 25: (1.104004268s)
helpers_test.go:260: TestMultiControlPlane/serial/AddWorkerNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                           ARGS                                                            │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ kubectl │ ha-472903 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:08 UTC │ 17 Sep 25 00:08 UTC │
	│ kubectl │ ha-472903 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:08 UTC │ 17 Sep 25 00:08 UTC │
	│ kubectl │ ha-472903 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:08 UTC │ 17 Sep 25 00:08 UTC │
	│ kubectl │ ha-472903 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:09 UTC │ 17 Sep 25 00:09 UTC │
	│ kubectl │ ha-472903 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:09 UTC │ 17 Sep 25 00:09 UTC │
	│ kubectl │ ha-472903 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:09 UTC │ 17 Sep 25 00:09 UTC │
	│ kubectl │ ha-472903 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:09 UTC │ 17 Sep 25 00:09 UTC │
	│ kubectl │ ha-472903 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:10 UTC │ 17 Sep 25 00:10 UTC │
	│ kubectl │ ha-472903 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                                     │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:10 UTC │ 17 Sep 25 00:10 UTC │
	│ kubectl │ ha-472903 kubectl -- exec busybox-7b57f96db7-4jfjt -- nslookup kubernetes.io                                              │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:10 UTC │ 17 Sep 25 00:10 UTC │
	│ kubectl │ ha-472903 kubectl -- exec busybox-7b57f96db7-6hrm6 -- nslookup kubernetes.io                                              │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:10 UTC │ 17 Sep 25 00:10 UTC │
	│ kubectl │ ha-472903 kubectl -- exec busybox-7b57f96db7-mknzs -- nslookup kubernetes.io                                              │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:10 UTC │                     │
	│ kubectl │ ha-472903 kubectl -- exec busybox-7b57f96db7-4jfjt -- nslookup kubernetes.default                                         │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:10 UTC │ 17 Sep 25 00:10 UTC │
	│ kubectl │ ha-472903 kubectl -- exec busybox-7b57f96db7-6hrm6 -- nslookup kubernetes.default                                         │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:10 UTC │ 17 Sep 25 00:10 UTC │
	│ kubectl │ ha-472903 kubectl -- exec busybox-7b57f96db7-mknzs -- nslookup kubernetes.default                                         │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:10 UTC │                     │
	│ kubectl │ ha-472903 kubectl -- exec busybox-7b57f96db7-4jfjt -- nslookup kubernetes.default.svc.cluster.local                       │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:10 UTC │ 17 Sep 25 00:10 UTC │
	│ kubectl │ ha-472903 kubectl -- exec busybox-7b57f96db7-6hrm6 -- nslookup kubernetes.default.svc.cluster.local                       │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:10 UTC │ 17 Sep 25 00:10 UTC │
	│ kubectl │ ha-472903 kubectl -- exec busybox-7b57f96db7-mknzs -- nslookup kubernetes.default.svc.cluster.local                       │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:10 UTC │                     │
	│ kubectl │ ha-472903 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                                     │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:10 UTC │ 17 Sep 25 00:10 UTC │
	│ kubectl │ ha-472903 kubectl -- exec busybox-7b57f96db7-4jfjt -- sh -c nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3 │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:10 UTC │ 17 Sep 25 00:10 UTC │
	│ kubectl │ ha-472903 kubectl -- exec busybox-7b57f96db7-4jfjt -- sh -c ping -c 1 192.168.49.1                                        │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:10 UTC │ 17 Sep 25 00:10 UTC │
	│ kubectl │ ha-472903 kubectl -- exec busybox-7b57f96db7-6hrm6 -- sh -c nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3 │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:10 UTC │ 17 Sep 25 00:10 UTC │
	│ kubectl │ ha-472903 kubectl -- exec busybox-7b57f96db7-6hrm6 -- sh -c ping -c 1 192.168.49.1                                        │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:10 UTC │ 17 Sep 25 00:10 UTC │
	│ kubectl │ ha-472903 kubectl -- exec busybox-7b57f96db7-mknzs -- sh -c nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3 │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:10 UTC │                     │
	│ node    │ ha-472903 node add --alsologtostderr -v 5                                                                                 │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:10 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/16 23:56:30
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 23:56:30.301112  804231 out.go:360] Setting OutFile to fd 1 ...
	I0916 23:56:30.301322  804231 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0916 23:56:30.301330  804231 out.go:374] Setting ErrFile to fd 2...
	I0916 23:56:30.301335  804231 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0916 23:56:30.301535  804231 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-749120/.minikube/bin
	I0916 23:56:30.302024  804231 out.go:368] Setting JSON to false
	I0916 23:56:30.302925  804231 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":9532,"bootTime":1758057458,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 23:56:30.303027  804231 start.go:140] virtualization: kvm guest
	I0916 23:56:30.304965  804231 out.go:179] * [ha-472903] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0916 23:56:30.306181  804231 out.go:179]   - MINIKUBE_LOCATION=21550
	I0916 23:56:30.306189  804231 notify.go:220] Checking for updates...
	I0916 23:56:30.308309  804231 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 23:56:30.309530  804231 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-749120/kubeconfig
	I0916 23:56:30.310577  804231 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-749120/.minikube
	I0916 23:56:30.311523  804231 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 23:56:30.312490  804231 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 23:56:30.313634  804231 driver.go:421] Setting default libvirt URI to qemu:///system
	I0916 23:56:30.336203  804231 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0916 23:56:30.336330  804231 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 23:56:30.390690  804231 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-16 23:56:30.380521507 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 23:56:30.390801  804231 docker.go:318] overlay module found
	I0916 23:56:30.392435  804231 out.go:179] * Using the docker driver based on user configuration
	I0916 23:56:30.393493  804231 start.go:304] selected driver: docker
	I0916 23:56:30.393505  804231 start.go:918] validating driver "docker" against <nil>
	I0916 23:56:30.393517  804231 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 23:56:30.394092  804231 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 23:56:30.448140  804231 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-16 23:56:30.438500908 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 23:56:30.448302  804231 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0916 23:56:30.448529  804231 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 23:56:30.450143  804231 out.go:179] * Using Docker driver with root privileges
	I0916 23:56:30.451156  804231 cni.go:84] Creating CNI manager for ""
	I0916 23:56:30.451216  804231 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0916 23:56:30.451226  804231 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 23:56:30.451301  804231 start.go:348] cluster config:
	{Name:ha-472903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 23:56:30.452491  804231 out.go:179] * Starting "ha-472903" primary control-plane node in "ha-472903" cluster
	I0916 23:56:30.453469  804231 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0916 23:56:30.454617  804231 out.go:179] * Pulling base image v0.0.48 ...
	I0916 23:56:30.455626  804231 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0916 23:56:30.455658  804231 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4
	I0916 23:56:30.455669  804231 cache.go:58] Caching tarball of preloaded images
	I0916 23:56:30.455737  804231 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0916 23:56:30.455747  804231 preload.go:172] Found /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 23:56:30.455875  804231 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0916 23:56:30.456208  804231 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0916 23:56:30.456245  804231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json: {Name:mkb16495f6ef626fa58a9600f3b4a943b5aaf14d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:56:30.475568  804231 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0916 23:56:30.475587  804231 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0916 23:56:30.475611  804231 cache.go:232] Successfully downloaded all kic artifacts
	I0916 23:56:30.475644  804231 start.go:360] acquireMachinesLock for ha-472903: {Name:mk994658ce3314f2aed1dec341debc49d36a4326 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 23:56:30.475759  804231 start.go:364] duration metric: took 97.738µs to acquireMachinesLock for "ha-472903"
	I0916 23:56:30.475786  804231 start.go:93] Provisioning new machine with config: &{Name:ha-472903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 23:56:30.475881  804231 start.go:125] createHost starting for "" (driver="docker")
	I0916 23:56:30.477680  804231 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0916 23:56:30.477953  804231 start.go:159] libmachine.API.Create for "ha-472903" (driver="docker")
	I0916 23:56:30.477986  804231 client.go:168] LocalClient.Create starting
	I0916 23:56:30.478060  804231 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem
	I0916 23:56:30.478097  804231 main.go:141] libmachine: Decoding PEM data...
	I0916 23:56:30.478118  804231 main.go:141] libmachine: Parsing certificate...
	I0916 23:56:30.478203  804231 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem
	I0916 23:56:30.478234  804231 main.go:141] libmachine: Decoding PEM data...
	I0916 23:56:30.478247  804231 main.go:141] libmachine: Parsing certificate...
	I0916 23:56:30.478706  804231 cli_runner.go:164] Run: docker network inspect ha-472903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0916 23:56:30.494743  804231 cli_runner.go:211] docker network inspect ha-472903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0916 23:56:30.494806  804231 network_create.go:284] running [docker network inspect ha-472903] to gather additional debugging logs...
	I0916 23:56:30.494829  804231 cli_runner.go:164] Run: docker network inspect ha-472903
	W0916 23:56:30.510851  804231 cli_runner.go:211] docker network inspect ha-472903 returned with exit code 1
	I0916 23:56:30.510886  804231 network_create.go:287] error running [docker network inspect ha-472903]: docker network inspect ha-472903: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-472903 not found
	I0916 23:56:30.510919  804231 network_create.go:289] output of [docker network inspect ha-472903]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-472903 not found
	
	** /stderr **
	I0916 23:56:30.511007  804231 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:56:30.527272  804231 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001b12870}
	I0916 23:56:30.527312  804231 network_create.go:124] attempt to create docker network ha-472903 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0916 23:56:30.527357  804231 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-472903 ha-472903
	I0916 23:56:30.581246  804231 network_create.go:108] docker network ha-472903 192.168.49.0/24 created
	I0916 23:56:30.581278  804231 kic.go:121] calculated static IP "192.168.49.2" for the "ha-472903" container
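The "calculated static IP" line above follows directly from the subnet that was just created: the gateway takes the first host address of 192.168.49.0/24 and the primary node container takes the next one. A tiny illustrative Go snippet of that derivation (illustrative only, not minikube's own code):

// Hypothetical sketch: derive the gateway and first node IP from the
// 192.168.49.0/24 subnet shown in the log above.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	prefix := netip.MustParsePrefix("192.168.49.0/24")
	gateway := prefix.Addr().Next() // 192.168.49.1
	firstNode := gateway.Next()     // 192.168.49.2, the "ha-472903" container
	fmt.Println(gateway, firstNode)
}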
	I0916 23:56:30.581331  804231 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 23:56:30.597113  804231 cli_runner.go:164] Run: docker volume create ha-472903 --label name.minikube.sigs.k8s.io=ha-472903 --label created_by.minikube.sigs.k8s.io=true
	I0916 23:56:30.614615  804231 oci.go:103] Successfully created a docker volume ha-472903
	I0916 23:56:30.614694  804231 cli_runner.go:164] Run: docker run --rm --name ha-472903-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-472903 --entrypoint /usr/bin/test -v ha-472903:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0916 23:56:30.983301  804231 oci.go:107] Successfully prepared a docker volume ha-472903
	I0916 23:56:30.983346  804231 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0916 23:56:30.983369  804231 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 23:56:30.983457  804231 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-472903:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 23:56:35.109877  804231 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-472903:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.126378793s)
	I0916 23:56:35.109930  804231 kic.go:203] duration metric: took 4.126557088s to extract preloaded images to volume ...
	W0916 23:56:35.110010  804231 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0916 23:56:35.110041  804231 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0916 23:56:35.110081  804231 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 23:56:35.162423  804231 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-472903 --name ha-472903 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-472903 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-472903 --network ha-472903 --ip 192.168.49.2 --volume ha-472903:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0916 23:56:35.411448  804231 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Running}}
	I0916 23:56:35.428877  804231 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0916 23:56:35.447492  804231 cli_runner.go:164] Run: docker exec ha-472903 stat /var/lib/dpkg/alternatives/iptables
	I0916 23:56:35.490145  804231 oci.go:144] the created container "ha-472903" has a running status.
	I0916 23:56:35.490177  804231 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa...
	I0916 23:56:35.748917  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0916 23:56:35.748974  804231 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 23:56:35.776040  804231 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0916 23:56:35.795374  804231 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 23:56:35.795403  804231 kic_runner.go:114] Args: [docker exec --privileged ha-472903 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 23:56:35.841194  804231 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0916 23:56:35.859165  804231 machine.go:93] provisionDockerMachine start ...
	I0916 23:56:35.859278  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:56:35.877348  804231 main.go:141] libmachine: Using SSH client type: native
	I0916 23:56:35.877637  804231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33544 <nil> <nil>}
	I0916 23:56:35.877654  804231 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 23:56:36.014327  804231 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-472903
	
	I0916 23:56:36.014356  804231 ubuntu.go:182] provisioning hostname "ha-472903"
	I0916 23:56:36.014430  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:56:36.033295  804231 main.go:141] libmachine: Using SSH client type: native
	I0916 23:56:36.033543  804231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33544 <nil> <nil>}
	I0916 23:56:36.033558  804231 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-472903 && echo "ha-472903" | sudo tee /etc/hostname
	I0916 23:56:36.178557  804231 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-472903
	
	I0916 23:56:36.178627  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:56:36.196584  804231 main.go:141] libmachine: Using SSH client type: native
	I0916 23:56:36.196791  804231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33544 <nil> <nil>}
	I0916 23:56:36.196814  804231 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-472903' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-472903/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-472903' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 23:56:36.331895  804231 main.go:141] libmachine: SSH cmd err, output: <nil>: 
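The provisioning commands above (hostname, /etc/hostname, /etc/hosts) are run through the "native" SSH client over the forwarded port 33544 with the generated machine key and the "docker" user. A minimal sketch of running one such command with golang.org/x/crypto/ssh follows; it is illustrative only and not minikube's libmachine implementation.

// Illustrative sketch only: execute a provisioning command over the
// forwarded SSH port for the "ha-472903" container (values taken from the
// log above: 127.0.0.1:33544, user "docker", generated id_rsa key).
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a local test VM
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33544", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	// The same hostname-provisioning command the log shows being run.
	out, err := sess.CombinedOutput(`sudo hostname ha-472903 && echo "ha-472903" | sudo tee /etc/hostname`)
	fmt.Printf("output: %q err: %v\n", out, err)
}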
	I0916 23:56:36.331954  804231 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-749120/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-749120/.minikube}
	I0916 23:56:36.331987  804231 ubuntu.go:190] setting up certificates
	I0916 23:56:36.332000  804231 provision.go:84] configureAuth start
	I0916 23:56:36.332062  804231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903
	I0916 23:56:36.350923  804231 provision.go:143] copyHostCerts
	I0916 23:56:36.350968  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem
	I0916 23:56:36.351011  804231 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem, removing ...
	I0916 23:56:36.351021  804231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem
	I0916 23:56:36.351100  804231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem (1078 bytes)
	I0916 23:56:36.351216  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem
	I0916 23:56:36.351254  804231 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem, removing ...
	I0916 23:56:36.351265  804231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem
	I0916 23:56:36.351307  804231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem (1123 bytes)
	I0916 23:56:36.351374  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem
	I0916 23:56:36.351400  804231 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem, removing ...
	I0916 23:56:36.351409  804231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem
	I0916 23:56:36.351461  804231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem (1675 bytes)
	I0916 23:56:36.351538  804231 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem org=jenkins.ha-472903 san=[127.0.0.1 192.168.49.2 ha-472903 localhost minikube]
	I0916 23:56:36.406870  804231 provision.go:177] copyRemoteCerts
	I0916 23:56:36.406927  804231 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 23:56:36.406977  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:56:36.424064  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0916 23:56:36.520663  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 23:56:36.520737  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 23:56:36.546100  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 23:56:36.546162  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0916 23:56:36.569886  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 23:56:36.569946  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 23:56:36.593694  804231 provision.go:87] duration metric: took 261.676108ms to configureAuth
	I0916 23:56:36.593725  804231 ubuntu.go:206] setting minikube options for container-runtime
	I0916 23:56:36.593891  804231 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0916 23:56:36.593903  804231 machine.go:96] duration metric: took 734.71199ms to provisionDockerMachine
	I0916 23:56:36.593911  804231 client.go:171] duration metric: took 6.115914604s to LocalClient.Create
	I0916 23:56:36.593933  804231 start.go:167] duration metric: took 6.115991162s to libmachine.API.Create "ha-472903"
	I0916 23:56:36.593942  804231 start.go:293] postStartSetup for "ha-472903" (driver="docker")
	I0916 23:56:36.593950  804231 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 23:56:36.593994  804231 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 23:56:36.594038  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:56:36.611127  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0916 23:56:36.708294  804231 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 23:56:36.711629  804231 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 23:56:36.711662  804231 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 23:56:36.711669  804231 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 23:56:36.711677  804231 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0916 23:56:36.711690  804231 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-749120/.minikube/addons for local assets ...
	I0916 23:56:36.711734  804231 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-749120/.minikube/files for local assets ...
	I0916 23:56:36.711817  804231 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> 7527072.pem in /etc/ssl/certs
	I0916 23:56:36.711829  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> /etc/ssl/certs/7527072.pem
	I0916 23:56:36.711917  804231 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 23:56:36.720521  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem --> /etc/ssl/certs/7527072.pem (1708 bytes)
	I0916 23:56:36.746614  804231 start.go:296] duration metric: took 152.657806ms for postStartSetup
	I0916 23:56:36.746970  804231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903
	I0916 23:56:36.763912  804231 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0916 23:56:36.764159  804231 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 23:56:36.764204  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:56:36.781099  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0916 23:56:36.872372  804231 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 23:56:36.876670  804231 start.go:128] duration metric: took 6.400768235s to createHost
	I0916 23:56:36.876701  804231 start.go:83] releasing machines lock for "ha-472903", held for 6.400928988s
	I0916 23:56:36.876787  804231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903
	I0916 23:56:36.894080  804231 ssh_runner.go:195] Run: cat /version.json
	I0916 23:56:36.894094  804231 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 23:56:36.894141  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:56:36.894182  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:56:36.912628  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0916 23:56:36.913001  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0916 23:56:37.079386  804231 ssh_runner.go:195] Run: systemctl --version
	I0916 23:56:37.084104  804231 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 23:56:37.088563  804231 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 23:56:37.116786  804231 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 23:56:37.116846  804231 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 23:56:37.142716  804231 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0916 23:56:37.142738  804231 start.go:495] detecting cgroup driver to use...
	I0916 23:56:37.142772  804231 detect.go:190] detected "systemd" cgroup driver on host os
	I0916 23:56:37.142832  804231 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 23:56:37.154693  804231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 23:56:37.165920  804231 docker.go:218] disabling cri-docker service (if available) ...
	I0916 23:56:37.165978  804231 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 23:56:37.179227  804231 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 23:56:37.192751  804231 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 23:56:37.255915  804231 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 23:56:37.324761  804231 docker.go:234] disabling docker service ...
	I0916 23:56:37.324836  804231 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 23:56:37.342233  804231 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 23:56:37.353324  804231 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 23:56:37.420555  804231 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 23:56:37.486396  804231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 23:56:37.497453  804231 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 23:56:37.513435  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0916 23:56:37.524399  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 23:56:37.534072  804231 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0916 23:56:37.534132  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0916 23:56:37.543872  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 23:56:37.553478  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 23:56:37.562918  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 23:56:37.572431  804231 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 23:56:37.581176  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 23:56:37.590540  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 23:56:37.599825  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 23:56:37.609340  804231 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 23:56:37.617500  804231 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 23:56:37.625771  804231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:56:37.685687  804231 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0916 23:56:37.787201  804231 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0916 23:56:37.787275  804231 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0916 23:56:37.791126  804231 start.go:563] Will wait 60s for crictl version
	I0916 23:56:37.791200  804231 ssh_runner.go:195] Run: which crictl
	I0916 23:56:37.794684  804231 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 23:56:37.828753  804231 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0916 23:56:37.828806  804231 ssh_runner.go:195] Run: containerd --version
	I0916 23:56:37.851610  804231 ssh_runner.go:195] Run: containerd --version
	I0916 23:56:37.876577  804231 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0916 23:56:37.877711  804231 cli_runner.go:164] Run: docker network inspect ha-472903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:56:37.894044  804231 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 23:56:37.897995  804231 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 23:56:37.909702  804231 kubeadm.go:875] updating cluster {Name:ha-472903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 23:56:37.909830  804231 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0916 23:56:37.909936  804231 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 23:56:37.943964  804231 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 23:56:37.943985  804231 containerd.go:534] Images already preloaded, skipping extraction
	I0916 23:56:37.944040  804231 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 23:56:37.976374  804231 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 23:56:37.976397  804231 cache_images.go:85] Images are preloaded, skipping loading
	I0916 23:56:37.976405  804231 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 containerd true true} ...
	I0916 23:56:37.976525  804231 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-472903 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 23:56:37.976590  804231 ssh_runner.go:195] Run: sudo crictl info
	I0916 23:56:38.009585  804231 cni.go:84] Creating CNI manager for ""
	I0916 23:56:38.009608  804231 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0916 23:56:38.009620  804231 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 23:56:38.009642  804231 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-472903 NodeName:ha-472903 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 23:56:38.009740  804231 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "ha-472903"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
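The generated kubeadm config above is a multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is later copied to /var/tmp/minikube/kubeadm.yaml.new on the node, per the scp line further down. A minimal sketch of decoding such a multi-document config with gopkg.in/yaml.v3 follows; the embedded YAML is a trimmed excerpt of the config above and the snippet is illustrative only, not part of this test run.

// Illustrative sketch: walk a multi-document kubeadm config and print the
// fields this HA profile cares about (controlPlaneEndpoint, podSubnet).
package main

import (
	"fmt"
	"io"
	"log"
	"strings"

	"gopkg.in/yaml.v3"
)

const cfg = `apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
controlPlaneEndpoint: control-plane.minikube.internal:8443
networking:
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
`

func main() {
	dec := yaml.NewDecoder(strings.NewReader(cfg))
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			if err == io.EOF {
				break
			}
			log.Fatal(err)
		}
		fmt.Println("kind:", doc["kind"])
		if cpe, ok := doc["controlPlaneEndpoint"]; ok {
			fmt.Println("  controlPlaneEndpoint:", cpe)
		}
		if net, ok := doc["networking"].(map[string]interface{}); ok {
			fmt.Println("  podSubnet:", net["podSubnet"])
		}
	}
}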
	
	I0916 23:56:38.009763  804231 kube-vip.go:115] generating kube-vip config ...
	I0916 23:56:38.009799  804231 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0916 23:56:38.022796  804231 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0916 23:56:38.022978  804231 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
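The failed 'sudo sh -c "lsmod | grep ip_vs"' probe a few lines earlier is why the manifest above configures only the ARP-advertised VIP (vip_arp, address 192.168.49.254) and skips IPVS-based control-plane load-balancing. Since lsmod only reads /proc/modules, the same check can be expressed directly; the Go snippet below is a small illustrative equivalent run on the node, not the minikube code path.

// Illustrative sketch: check whether the ip_vs kernel module is loaded by
// scanning /proc/modules, mirroring the lsmod | grep ip_vs probe above.
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/proc/modules")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	loaded := false
	s := bufio.NewScanner(f)
	for s.Scan() {
		if strings.HasPrefix(s.Text(), "ip_vs") {
			loaded = true
			break
		}
	}
	if err := s.Err(); err != nil {
		log.Fatal(err)
	}
	// false in this run, so kube-vip is configured without IPVS load-balancing.
	fmt.Println("ip_vs loaded:", loaded)
}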
	I0916 23:56:38.023041  804231 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0916 23:56:38.032162  804231 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 23:56:38.032241  804231 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0916 23:56:38.040936  804231 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0916 23:56:38.058672  804231 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 23:56:38.079097  804231 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I0916 23:56:38.097183  804231 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I0916 23:56:38.116629  804231 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0916 23:56:38.120221  804231 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 23:56:38.131205  804231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:56:38.195735  804231 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 23:56:38.216649  804231 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903 for IP: 192.168.49.2
	I0916 23:56:38.216671  804231 certs.go:194] generating shared ca certs ...
	I0916 23:56:38.216692  804231 certs.go:226] acquiring lock for ca certs: {Name:mk87d179b4a631193bd9c86db8034ccf19400cde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:56:38.216854  804231 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key
	I0916 23:56:38.216907  804231 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key
	I0916 23:56:38.216920  804231 certs.go:256] generating profile certs ...
	I0916 23:56:38.216989  804231 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key
	I0916 23:56:38.217007  804231 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.crt with IP's: []
	I0916 23:56:38.286683  804231 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.crt ...
	I0916 23:56:38.286713  804231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.crt: {Name:mk764ef4ac73429cea14d799835f3822d8afb254 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:56:38.286876  804231 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key ...
	I0916 23:56:38.286887  804231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key: {Name:mk988f40b7ad20c61b4ffc19afd15eea50787a6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:56:38.286965  804231 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.ef70afe8
	I0916 23:56:38.286981  804231 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.ef70afe8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I0916 23:56:38.411782  804231 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.ef70afe8 ...
	I0916 23:56:38.411812  804231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.ef70afe8: {Name:mkbca9fcc4cd73eb913b43ef67240975ba048601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:56:38.411977  804231 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.ef70afe8 ...
	I0916 23:56:38.411990  804231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.ef70afe8: {Name:mk56f7fb29011c6372caaf96dfdbcab1b202e8b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:56:38.412061  804231 certs.go:381] copying /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.ef70afe8 -> /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt
	I0916 23:56:38.412138  804231 certs.go:385] copying /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.ef70afe8 -> /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key
	I0916 23:56:38.412190  804231 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key
	I0916 23:56:38.412204  804231 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt with IP's: []
	I0916 23:56:38.735728  804231 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt ...
	I0916 23:56:38.735759  804231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt: {Name:mke25602938652bbe51197bb8e5738dfc5dca50b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:56:38.735935  804231 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key ...
	I0916 23:56:38.735947  804231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key: {Name:mkc7d616357a8be8181d43ca8cb33ab512ce94dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
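The profile certificates above are all issued against the shared minikubeCA, with the apiserver cert carrying the IP SANs 10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.49.2 and the HA VIP 192.168.49.254. A hedged sketch with Go's crypto/x509 of issuing a CA-signed serving certificate with those SANs follows; it is illustrative only and not minikube's certs.go.

// Illustrative sketch: issue a serving cert, signed by a throwaway CA
// standing in for minikubeCA, carrying the IP SANs listed in the log above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		log.Fatal(err)
	}

	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{ // the SANs shown for the apiserver profile cert
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.49.2"), net.ParseIP("192.168.49.254"),
		},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}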
	I0916 23:56:38.736027  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 23:56:38.736044  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 23:56:38.736055  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 23:56:38.736068  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 23:56:38.736078  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 23:56:38.736090  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 23:56:38.736105  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 23:56:38.736115  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 23:56:38.736175  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem (1338 bytes)
	W0916 23:56:38.736210  804231 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707_empty.pem, impossibly tiny 0 bytes
	I0916 23:56:38.736218  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem (1675 bytes)
	I0916 23:56:38.736242  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem (1078 bytes)
	I0916 23:56:38.736266  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem (1123 bytes)
	I0916 23:56:38.736284  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem (1675 bytes)
	I0916 23:56:38.736322  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem (1708 bytes)
	I0916 23:56:38.736347  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> /usr/share/ca-certificates/7527072.pem
	I0916 23:56:38.736360  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:56:38.736372  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem -> /usr/share/ca-certificates/752707.pem
	I0916 23:56:38.736905  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 23:56:38.762142  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 23:56:38.786590  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 23:56:38.810694  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0916 23:56:38.834521  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0916 23:56:38.858677  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 23:56:38.881975  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 23:56:38.906146  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 23:56:38.929698  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem --> /usr/share/ca-certificates/7527072.pem (1708 bytes)
	I0916 23:56:38.955154  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 23:56:38.978551  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem --> /usr/share/ca-certificates/752707.pem (1338 bytes)
	I0916 23:56:39.001782  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 23:56:39.019405  804231 ssh_runner.go:195] Run: openssl version
	I0916 23:56:39.024868  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/752707.pem && ln -fs /usr/share/ca-certificates/752707.pem /etc/ssl/certs/752707.pem"
	I0916 23:56:39.034165  804231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/752707.pem
	I0916 23:56:39.038348  804231 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 23:54 /usr/share/ca-certificates/752707.pem
	I0916 23:56:39.038407  804231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/752707.pem
	I0916 23:56:39.045172  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/752707.pem /etc/ssl/certs/51391683.0"
	I0916 23:56:39.054735  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7527072.pem && ln -fs /usr/share/ca-certificates/7527072.pem /etc/ssl/certs/7527072.pem"
	I0916 23:56:39.065180  804231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7527072.pem
	I0916 23:56:39.068976  804231 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 23:54 /usr/share/ca-certificates/7527072.pem
	I0916 23:56:39.069038  804231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7527072.pem
	I0916 23:56:39.075920  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7527072.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 23:56:39.085838  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 23:56:39.095394  804231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:56:39.098966  804231 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:56:39.099019  804231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:56:39.105643  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
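
The Run lines above show how minikube publishes each CA into the system trust store: it asks openssl for the certificate's subject hash, then symlinks the PEM as /etc/ssl/certs/<hash>.0, which is the name OpenSSL looks up at verification time. A minimal Go sketch of that idea (illustrative only, not minikube's actual certs.go; the paths are the examples from the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCAByHash mirrors the `openssl x509 -hash -noout` + `ln -fs` pair seen in
// the log: compute the subject hash of a CA and expose it as <hash>.0 in the
// OpenSSL certs directory.
func linkCAByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // behave like `ln -fs`: replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCAByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
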
	I0916 23:56:39.114800  804231 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 23:56:39.117988  804231 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 23:56:39.118033  804231 kubeadm.go:392] StartCluster: {Name:ha-472903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 23:56:39.118097  804231 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0916 23:56:39.118132  804231 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 23:56:39.154291  804231 cri.go:89] found id: ""
	I0916 23:56:39.154361  804231 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 23:56:39.163485  804231 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 23:56:39.172454  804231 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0916 23:56:39.172499  804231 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 23:56:39.181066  804231 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 23:56:39.181098  804231 kubeadm.go:157] found existing configuration files:
	
	I0916 23:56:39.181131  804231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 23:56:39.189824  804231 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 23:56:39.189873  804231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 23:56:39.198165  804231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 23:56:39.206772  804231 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 23:56:39.206819  804231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 23:56:39.215119  804231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 23:56:39.223660  804231 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 23:56:39.223717  804231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 23:56:39.232099  804231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 23:56:39.240514  804231 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 23:56:39.240559  804231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 23:56:39.248850  804231 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0916 23:56:39.285897  804231 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0916 23:56:39.285950  804231 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 23:56:39.300660  804231 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0916 23:56:39.300727  804231 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1037-gcp
	I0916 23:56:39.300801  804231 kubeadm.go:310] OS: Linux
	I0916 23:56:39.300901  804231 kubeadm.go:310] CGROUPS_CPU: enabled
	I0916 23:56:39.300975  804231 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0916 23:56:39.301037  804231 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0916 23:56:39.301080  804231 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0916 23:56:39.301127  804231 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0916 23:56:39.301169  804231 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0916 23:56:39.301211  804231 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0916 23:56:39.301268  804231 kubeadm.go:310] CGROUPS_IO: enabled
	I0916 23:56:39.351787  804231 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 23:56:39.351909  804231 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 23:56:39.351995  804231 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 23:56:39.358062  804231 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 23:56:39.360794  804231 out.go:252]   - Generating certificates and keys ...
	I0916 23:56:39.360906  804231 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 23:56:39.360984  804231 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 23:56:39.805287  804231 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 23:56:40.002708  804231 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 23:56:40.279763  804231 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 23:56:40.813028  804231 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 23:56:41.074848  804231 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 23:56:41.075343  804231 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-472903 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0916 23:56:41.124880  804231 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 23:56:41.125041  804231 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-472903 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0916 23:56:41.707716  804231 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 23:56:42.089212  804231 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 23:56:42.627038  804231 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 23:56:42.627119  804231 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 23:56:42.823901  804231 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 23:56:43.022989  804231 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 23:56:43.163778  804231 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 23:56:43.708743  804231 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 23:56:44.024642  804231 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 23:56:44.025130  804231 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 23:56:44.027319  804231 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 23:56:44.029599  804231 out.go:252]   - Booting up control plane ...
	I0916 23:56:44.029737  804231 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 23:56:44.029842  804231 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 23:56:44.030181  804231 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 23:56:44.039957  804231 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 23:56:44.040118  804231 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0916 23:56:44.047794  804231 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0916 23:56:44.048177  804231 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 23:56:44.048269  804231 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 23:56:44.122629  804231 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 23:56:44.122739  804231 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 23:56:45.124352  804231 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001822735s
	I0916 23:56:45.127338  804231 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0916 23:56:45.127477  804231 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0916 23:56:45.127582  804231 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0916 23:56:45.127694  804231 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0916 23:56:47.478256  804231 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.350892202s
	I0916 23:56:47.717698  804231 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.590223043s
	I0916 23:56:49.129161  804231 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 4.001748341s
	I0916 23:56:49.140036  804231 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 23:56:49.148779  804231 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 23:56:49.158010  804231 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 23:56:49.158279  804231 kubeadm.go:310] [mark-control-plane] Marking the node ha-472903 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 23:56:49.165085  804231 kubeadm.go:310] [bootstrap-token] Using token: 4apri1.yqe8ok7wc4ltba21
	I0916 23:56:49.166180  804231 out.go:252]   - Configuring RBAC rules ...
	I0916 23:56:49.166328  804231 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 23:56:49.169225  804231 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 23:56:49.174527  804231 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 23:56:49.176741  804231 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 23:56:49.178892  804231 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 23:56:49.181107  804231 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 23:56:49.534440  804231 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 23:56:49.948567  804231 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 23:56:50.534581  804231 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 23:56:50.535429  804231 kubeadm.go:310] 
	I0916 23:56:50.535529  804231 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 23:56:50.535542  804231 kubeadm.go:310] 
	I0916 23:56:50.535650  804231 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 23:56:50.535660  804231 kubeadm.go:310] 
	I0916 23:56:50.535696  804231 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 23:56:50.535801  804231 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 23:56:50.535858  804231 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 23:56:50.535872  804231 kubeadm.go:310] 
	I0916 23:56:50.535940  804231 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 23:56:50.535949  804231 kubeadm.go:310] 
	I0916 23:56:50.536027  804231 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 23:56:50.536037  804231 kubeadm.go:310] 
	I0916 23:56:50.536125  804231 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 23:56:50.536212  804231 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 23:56:50.536280  804231 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 23:56:50.536286  804231 kubeadm.go:310] 
	I0916 23:56:50.536356  804231 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 23:56:50.536441  804231 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 23:56:50.536448  804231 kubeadm.go:310] 
	I0916 23:56:50.536543  804231 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 4apri1.yqe8ok7wc4ltba21 \
	I0916 23:56:50.536688  804231 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:52c78ec9ad9a2dc0941e43ce337b864c76ea573e452bc75ed737e69ad76deac1 \
	I0916 23:56:50.536722  804231 kubeadm.go:310] 	--control-plane 
	I0916 23:56:50.536731  804231 kubeadm.go:310] 
	I0916 23:56:50.536842  804231 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 23:56:50.536857  804231 kubeadm.go:310] 
	I0916 23:56:50.536947  804231 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 4apri1.yqe8ok7wc4ltba21 \
	I0916 23:56:50.537084  804231 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:52c78ec9ad9a2dc0941e43ce337b864c76ea573e452bc75ed737e69ad76deac1 
	I0916 23:56:50.539097  804231 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1037-gcp\n", err: exit status 1
	I0916 23:56:50.539238  804231 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
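
The join commands printed above carry a --discovery-token-ca-cert-hash; that value is a SHA-256 over the DER-encoded Subject Public Key Info of the cluster CA certificate. A small illustrative Go sketch of how such a hash is derived (not part of the test itself; the CA path is the in-container location shown earlier in the log):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Read the cluster CA that kubeadm used.
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// The discovery hash is SHA-256 over the CA's raw SubjectPublicKeyInfo.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}
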
	I0916 23:56:50.539264  804231 cni.go:84] Creating CNI manager for ""
	I0916 23:56:50.539274  804231 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0916 23:56:50.540523  804231 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0916 23:56:50.541480  804231 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 23:56:50.545518  804231 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0916 23:56:50.545534  804231 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 23:56:50.563251  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0916 23:56:50.762002  804231 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 23:56:50.762092  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:56:50.762127  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-472903 minikube.k8s.io/updated_at=2025_09_16T23_56_50_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a minikube.k8s.io/name=ha-472903 minikube.k8s.io/primary=true
	I0916 23:56:50.771679  804231 ops.go:34] apiserver oom_adj: -16
	I0916 23:56:50.843646  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:56:51.344428  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:56:51.844440  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:56:52.344316  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:56:52.844594  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:56:53.343854  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:56:53.844615  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:56:54.344057  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:56:54.844066  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:56:55.344374  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:56:55.844478  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:56:55.927027  804231 kubeadm.go:1105] duration metric: took 5.165002596s to wait for elevateKubeSystemPrivileges
	I0916 23:56:55.927062  804231 kubeadm.go:394] duration metric: took 16.809033965s to StartCluster
	I0916 23:56:55.927081  804231 settings.go:142] acquiring lock: {Name:mk6c1a5bee23e141aad5180323c16c47ed580ab8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:56:55.927146  804231 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21550-749120/kubeconfig
	I0916 23:56:55.927785  804231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/kubeconfig: {Name:mk937123a8fee18625833b0bd778c4556f6787be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:56:55.928026  804231 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 23:56:55.928018  804231 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 23:56:55.928038  804231 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 23:56:55.928103  804231 start.go:241] waiting for startup goroutines ...
	I0916 23:56:55.928121  804231 addons.go:69] Setting default-storageclass=true in profile "ha-472903"
	I0916 23:56:55.928148  804231 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-472903"
	I0916 23:56:55.928126  804231 addons.go:69] Setting storage-provisioner=true in profile "ha-472903"
	I0916 23:56:55.928222  804231 addons.go:238] Setting addon storage-provisioner=true in "ha-472903"
	I0916 23:56:55.928269  804231 host.go:66] Checking if "ha-472903" exists ...
	I0916 23:56:55.928296  804231 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0916 23:56:55.928610  804231 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0916 23:56:55.928740  804231 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0916 23:56:55.954806  804231 kapi.go:59] client config for ha-472903: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key", CAFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 23:56:55.955519  804231 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0916 23:56:55.955545  804231 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0916 23:56:55.955543  804231 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0916 23:56:55.955553  804231 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0916 23:56:55.955611  804231 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0916 23:56:55.955620  804231 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0916 23:56:55.956096  804231 addons.go:238] Setting addon default-storageclass=true in "ha-472903"
	I0916 23:56:55.956145  804231 host.go:66] Checking if "ha-472903" exists ...
	I0916 23:56:55.956685  804231 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0916 23:56:55.957279  804231 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 23:56:55.961536  804231 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 23:56:55.961557  804231 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 23:56:55.961614  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:56:55.979896  804231 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 23:56:55.979925  804231 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 23:56:55.979985  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:56:55.982838  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0916 23:56:55.999402  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0916 23:56:56.011618  804231 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 23:56:56.095355  804231 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 23:56:56.110814  804231 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 23:56:56.153646  804231 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
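
The replace command at 23:56:56.011618 rewrites the CoreDNS Corefile so that host.minikube.internal resolves to the gateway IP: a hosts{} block with fallthrough is inserted ahead of the forward plugin (plus a log directive after errors). A rough Go sketch of that same text transformation, for illustration only (the sample Corefile below is abbreviated, not the cluster's actual ConfigMap):

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a hosts{} block (with fallthrough) immediately
// before the "forward . /etc/resolv.conf" plugin, the same effect as the sed
// pipeline in the log.
func injectHostRecord(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	var out strings.Builder
	for _, line := range strings.Split(corefile, "\n") {
		if strings.Contains(line, "forward . /etc/resolv.conf") {
			out.WriteString(hostsBlock)
		}
		out.WriteString(line)
		out.WriteString("\n")
	}
	return out.String()
}

func main() {
	sample := ".:53 {\n        errors\n        health\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n        cache 30\n}"
	fmt.Print(injectHostRecord(sample, "192.168.49.1"))
}
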
	I0916 23:56:56.360175  804231 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0916 23:56:56.361116  804231 addons.go:514] duration metric: took 433.076562ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0916 23:56:56.361149  804231 start.go:246] waiting for cluster config update ...
	I0916 23:56:56.361163  804231 start.go:255] writing updated cluster config ...
	I0916 23:56:56.362407  804231 out.go:203] 
	I0916 23:56:56.363527  804231 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0916 23:56:56.363621  804231 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0916 23:56:56.364993  804231 out.go:179] * Starting "ha-472903-m02" control-plane node in "ha-472903" cluster
	I0916 23:56:56.365873  804231 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0916 23:56:56.366751  804231 out.go:179] * Pulling base image v0.0.48 ...
	I0916 23:56:56.367539  804231 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0916 23:56:56.367556  804231 cache.go:58] Caching tarball of preloaded images
	I0916 23:56:56.367630  804231 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0916 23:56:56.367646  804231 preload.go:172] Found /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 23:56:56.367654  804231 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0916 23:56:56.367711  804231 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0916 23:56:56.386547  804231 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0916 23:56:56.386565  804231 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0916 23:56:56.386580  804231 cache.go:232] Successfully downloaded all kic artifacts
	I0916 23:56:56.386607  804231 start.go:360] acquireMachinesLock for ha-472903-m02: {Name:mk81d8c73856cf84ceff1767a1681f3f3cdab773 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 23:56:56.386700  804231 start.go:364] duration metric: took 70.184µs to acquireMachinesLock for "ha-472903-m02"
	I0916 23:56:56.386738  804231 start.go:93] Provisioning new machine with config: &{Name:ha-472903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 23:56:56.386824  804231 start.go:125] createHost starting for "m02" (driver="docker")
	I0916 23:56:56.388402  804231 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0916 23:56:56.388536  804231 start.go:159] libmachine.API.Create for "ha-472903" (driver="docker")
	I0916 23:56:56.388563  804231 client.go:168] LocalClient.Create starting
	I0916 23:56:56.388626  804231 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem
	I0916 23:56:56.388664  804231 main.go:141] libmachine: Decoding PEM data...
	I0916 23:56:56.388687  804231 main.go:141] libmachine: Parsing certificate...
	I0916 23:56:56.388757  804231 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem
	I0916 23:56:56.388789  804231 main.go:141] libmachine: Decoding PEM data...
	I0916 23:56:56.388804  804231 main.go:141] libmachine: Parsing certificate...
	I0916 23:56:56.389042  804231 cli_runner.go:164] Run: docker network inspect ha-472903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:56:56.404624  804231 network_create.go:77] Found existing network {name:ha-472903 subnet:0xc001d2d140 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0916 23:56:56.404653  804231 kic.go:121] calculated static IP "192.168.49.3" for the "ha-472903-m02" container
	I0916 23:56:56.404719  804231 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 23:56:56.420231  804231 cli_runner.go:164] Run: docker volume create ha-472903-m02 --label name.minikube.sigs.k8s.io=ha-472903-m02 --label created_by.minikube.sigs.k8s.io=true
	I0916 23:56:56.436361  804231 oci.go:103] Successfully created a docker volume ha-472903-m02
	I0916 23:56:56.436430  804231 cli_runner.go:164] Run: docker run --rm --name ha-472903-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-472903-m02 --entrypoint /usr/bin/test -v ha-472903-m02:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0916 23:56:56.943375  804231 oci.go:107] Successfully prepared a docker volume ha-472903-m02
	I0916 23:56:56.943427  804231 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0916 23:56:56.943455  804231 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 23:56:56.943528  804231 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-472903-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 23:57:01.091161  804231 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-472903-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.147592491s)
	I0916 23:57:01.091197  804231 kic.go:203] duration metric: took 4.147738136s to extract preloaded images to volume ...
	W0916 23:57:01.091312  804231 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0916 23:57:01.091355  804231 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0916 23:57:01.091403  804231 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 23:57:01.142900  804231 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-472903-m02 --name ha-472903-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-472903-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-472903-m02 --network ha-472903 --ip 192.168.49.3 --volume ha-472903-m02:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0916 23:57:01.378924  804231 cli_runner.go:164] Run: docker container inspect ha-472903-m02 --format={{.State.Running}}
	I0916 23:57:01.396232  804231 cli_runner.go:164] Run: docker container inspect ha-472903-m02 --format={{.State.Status}}
	I0916 23:57:01.412927  804231 cli_runner.go:164] Run: docker exec ha-472903-m02 stat /var/lib/dpkg/alternatives/iptables
	I0916 23:57:01.469205  804231 oci.go:144] the created container "ha-472903-m02" has a running status.
	I0916 23:57:01.469235  804231 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa...
	I0916 23:57:01.517570  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0916 23:57:01.517621  804231 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 23:57:01.540818  804231 cli_runner.go:164] Run: docker container inspect ha-472903-m02 --format={{.State.Status}}
	I0916 23:57:01.560831  804231 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 23:57:01.560858  804231 kic_runner.go:114] Args: [docker exec --privileged ha-472903-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 23:57:01.615037  804231 cli_runner.go:164] Run: docker container inspect ha-472903-m02 --format={{.State.Status}}
	I0916 23:57:01.637921  804231 machine.go:93] provisionDockerMachine start ...
	I0916 23:57:01.638030  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0916 23:57:01.659741  804231 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:01.660056  804231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33549 <nil> <nil>}
	I0916 23:57:01.660078  804231 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 23:57:01.800716  804231 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-472903-m02
	
	I0916 23:57:01.800749  804231 ubuntu.go:182] provisioning hostname "ha-472903-m02"
	I0916 23:57:01.800817  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0916 23:57:01.819791  804231 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:01.820013  804231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33549 <nil> <nil>}
	I0916 23:57:01.820030  804231 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-472903-m02 && echo "ha-472903-m02" | sudo tee /etc/hostname
	I0916 23:57:01.967539  804231 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-472903-m02
	
	I0916 23:57:01.967631  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0916 23:57:01.987814  804231 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:01.988031  804231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33549 <nil> <nil>}
	I0916 23:57:01.988047  804231 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-472903-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-472903-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-472903-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 23:57:02.121536  804231 main.go:141] libmachine: SSH cmd err, output: <nil>: 
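
The "Using SSH client type: native" lines above correspond to minikube dialing the container's published SSH port (127.0.0.1:33549 here) and running provisioning commands over it. A hedged sketch of that flow using golang.org/x/crypto/ssh; the address and key path below are copied from the log but wired in by hand, and this is not minikube's own sshutil code:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runOverSSH connects to the forwarded Docker port and runs a single command,
// roughly what each "About to run SSH command" log line amounts to.
func runOverSSH(addr, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test machines only
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runOverSSH("127.0.0.1:33549",
		"/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa",
		"hostname")
	fmt.Print(out)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
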
	I0916 23:57:02.121571  804231 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-749120/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-749120/.minikube}
	I0916 23:57:02.121588  804231 ubuntu.go:190] setting up certificates
	I0916 23:57:02.121602  804231 provision.go:84] configureAuth start
	I0916 23:57:02.121663  804231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m02
	I0916 23:57:02.139056  804231 provision.go:143] copyHostCerts
	I0916 23:57:02.139098  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem
	I0916 23:57:02.139135  804231 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem, removing ...
	I0916 23:57:02.139147  804231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem
	I0916 23:57:02.139221  804231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem (1078 bytes)
	I0916 23:57:02.139329  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem
	I0916 23:57:02.139362  804231 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem, removing ...
	I0916 23:57:02.139372  804231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem
	I0916 23:57:02.139430  804231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem (1123 bytes)
	I0916 23:57:02.139521  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem
	I0916 23:57:02.139549  804231 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem, removing ...
	I0916 23:57:02.139559  804231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem
	I0916 23:57:02.139599  804231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem (1675 bytes)
	I0916 23:57:02.139690  804231 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem org=jenkins.ha-472903-m02 san=[127.0.0.1 192.168.49.3 ha-472903-m02 localhost minikube]
	I0916 23:57:02.262354  804231 provision.go:177] copyRemoteCerts
	I0916 23:57:02.262428  804231 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 23:57:02.262491  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0916 23:57:02.279792  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33549 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa Username:docker}
	I0916 23:57:02.375833  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 23:57:02.375903  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 23:57:02.400316  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 23:57:02.400373  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 23:57:02.422506  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 23:57:02.422550  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 23:57:02.445091  804231 provision.go:87] duration metric: took 323.464176ms to configureAuth
	I0916 23:57:02.445121  804231 ubuntu.go:206] setting minikube options for container-runtime
	I0916 23:57:02.445295  804231 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0916 23:57:02.445313  804231 machine.go:96] duration metric: took 807.372883ms to provisionDockerMachine
	I0916 23:57:02.445320  804231 client.go:171] duration metric: took 6.056751196s to LocalClient.Create
	I0916 23:57:02.445337  804231 start.go:167] duration metric: took 6.056804276s to libmachine.API.Create "ha-472903"
	I0916 23:57:02.445346  804231 start.go:293] postStartSetup for "ha-472903-m02" (driver="docker")
	I0916 23:57:02.445354  804231 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 23:57:02.445402  804231 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 23:57:02.445461  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0916 23:57:02.463550  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33549 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa Username:docker}
	I0916 23:57:02.559528  804231 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 23:57:02.562755  804231 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 23:57:02.562780  804231 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 23:57:02.562787  804231 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 23:57:02.562793  804231 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0916 23:57:02.562803  804231 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-749120/.minikube/addons for local assets ...
	I0916 23:57:02.562847  804231 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-749120/.minikube/files for local assets ...
	I0916 23:57:02.562920  804231 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> 7527072.pem in /etc/ssl/certs
	I0916 23:57:02.562930  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> /etc/ssl/certs/7527072.pem
	I0916 23:57:02.563018  804231 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 23:57:02.571142  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem --> /etc/ssl/certs/7527072.pem (1708 bytes)
	I0916 23:57:02.596466  804231 start.go:296] duration metric: took 151.106324ms for postStartSetup
	I0916 23:57:02.596768  804231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m02
	I0916 23:57:02.613316  804231 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0916 23:57:02.613561  804231 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 23:57:02.613601  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0916 23:57:02.632056  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33549 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa Username:docker}
	I0916 23:57:02.723085  804231 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 23:57:02.727430  804231 start.go:128] duration metric: took 6.340577447s to createHost
	I0916 23:57:02.727453  804231 start.go:83] releasing machines lock for "ha-472903-m02", held for 6.34073897s
	I0916 23:57:02.727519  804231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m02
	I0916 23:57:02.746152  804231 out.go:179] * Found network options:
	I0916 23:57:02.747248  804231 out.go:179]   - NO_PROXY=192.168.49.2
	W0916 23:57:02.748187  804231 proxy.go:120] fail to check proxy env: Error ip not in block
	W0916 23:57:02.748240  804231 proxy.go:120] fail to check proxy env: Error ip not in block
	I0916 23:57:02.748311  804231 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 23:57:02.748360  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0916 23:57:02.748367  804231 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 23:57:02.748427  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0916 23:57:02.765286  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33549 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa Username:docker}
	I0916 23:57:02.766625  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33549 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa Username:docker}
	I0916 23:57:02.856922  804231 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 23:57:02.936692  804231 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 23:57:02.936761  804231 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 23:57:02.961822  804231 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0916 23:57:02.961845  804231 start.go:495] detecting cgroup driver to use...
	I0916 23:57:02.961878  804231 detect.go:190] detected "systemd" cgroup driver on host os
	I0916 23:57:02.961919  804231 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 23:57:02.973318  804231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 23:57:02.983927  804231 docker.go:218] disabling cri-docker service (if available) ...
	I0916 23:57:02.983969  804231 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 23:57:02.996091  804231 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 23:57:03.009314  804231 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 23:57:03.072565  804231 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 23:57:03.140469  804231 docker.go:234] disabling docker service ...
	I0916 23:57:03.140526  804231 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 23:57:03.157179  804231 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 23:57:03.167955  804231 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 23:57:03.233386  804231 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 23:57:03.296537  804231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 23:57:03.307574  804231 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 23:57:03.323754  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0916 23:57:03.334305  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 23:57:03.343767  804231 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0916 23:57:03.343826  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0916 23:57:03.353029  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 23:57:03.361991  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 23:57:03.371206  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 23:57:03.380598  804231 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 23:57:03.389216  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 23:57:03.398125  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 23:57:03.407145  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 23:57:03.416183  804231 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 23:57:03.424123  804231 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 23:57:03.432185  804231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:03.493561  804231 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0916 23:57:03.591942  804231 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0916 23:57:03.592010  804231 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0916 23:57:03.595710  804231 start.go:563] Will wait 60s for crictl version
	I0916 23:57:03.595768  804231 ssh_runner.go:195] Run: which crictl
	I0916 23:57:03.599108  804231 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 23:57:03.633181  804231 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
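
The two waits above ("Will wait 60s for socket path ..." and "Will wait 60s for crictl version") are simple readiness polls against the freshly restarted containerd. A minimal Go sketch of that kind of wait loop, using a hypothetical waitForSocket helper (illustrative only, not minikube's actual ssh_runner code):

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket polls until the unix socket exists or the timeout expires,
    // mirroring the "Will wait 60s for socket path" step in the log above.
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
    }

    func main() {
    	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("containerd socket is ready")
    }
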
	I0916 23:57:03.633231  804231 ssh_runner.go:195] Run: containerd --version
	I0916 23:57:03.656364  804231 ssh_runner.go:195] Run: containerd --version
	I0916 23:57:03.680150  804231 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0916 23:57:03.681177  804231 out.go:179]   - env NO_PROXY=192.168.49.2
	I0916 23:57:03.682053  804231 cli_runner.go:164] Run: docker network inspect ha-472903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:57:03.699720  804231 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 23:57:03.703306  804231 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 23:57:03.714275  804231 mustload.go:65] Loading cluster: ha-472903
	I0916 23:57:03.714452  804231 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0916 23:57:03.714650  804231 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0916 23:57:03.730631  804231 host.go:66] Checking if "ha-472903" exists ...
	I0916 23:57:03.730849  804231 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903 for IP: 192.168.49.3
	I0916 23:57:03.730859  804231 certs.go:194] generating shared ca certs ...
	I0916 23:57:03.730877  804231 certs.go:226] acquiring lock for ca certs: {Name:mk87d179b4a631193bd9c86db8034ccf19400cde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:03.730987  804231 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key
	I0916 23:57:03.731023  804231 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key
	I0916 23:57:03.731032  804231 certs.go:256] generating profile certs ...
	I0916 23:57:03.731092  804231 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key
	I0916 23:57:03.731114  804231 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.d67fba4a
	I0916 23:57:03.731125  804231 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.d67fba4a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I0916 23:57:03.830248  804231 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.d67fba4a ...
	I0916 23:57:03.830275  804231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.d67fba4a: {Name:mk3e97859392ca0d50685e4c31c19acd3c590753 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:03.830438  804231 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.d67fba4a ...
	I0916 23:57:03.830453  804231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.d67fba4a: {Name:mkd3ec6288ef831df369d4ec39839c410f5116ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:03.830530  804231 certs.go:381] copying /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.d67fba4a -> /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt
	I0916 23:57:03.830653  804231 certs.go:385] copying /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.d67fba4a -> /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key
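
The cert generation above (certs.go:363 / crypto.go:68) issues the shared apiserver serving certificate with IP SANs for the service VIP (10.96.0.1), localhost, both control-plane nodes (192.168.49.2, 192.168.49.3) and the kube-vip address (192.168.49.254). A minimal Go sketch of signing a certificate with those SANs, using a throwaway CA purely for illustration (minikube signs with its existing minikubeCA key instead; error handling elided for brevity):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Throwaway CA for the sketch; the real flow loads the existing minikubeCA key.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().AddDate(10, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Serving certificate with the same IP SANs the log lists for apiserver.crt.
    	key, _ := rsa.GenerateKey(rand.Reader, 2048)
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
    			net.ParseIP("192.168.49.2"), net.ParseIP("192.168.49.3"), net.ParseIP("192.168.49.254"),
    		},
    	}
    	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
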
	I0916 23:57:03.830779  804231 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key
	I0916 23:57:03.830794  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 23:57:03.830809  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 23:57:03.830823  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 23:57:03.830836  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 23:57:03.830846  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 23:57:03.830855  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 23:57:03.830864  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 23:57:03.830873  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 23:57:03.830920  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem (1338 bytes)
	W0916 23:57:03.830952  804231 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707_empty.pem, impossibly tiny 0 bytes
	I0916 23:57:03.830962  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem (1675 bytes)
	I0916 23:57:03.830981  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem (1078 bytes)
	I0916 23:57:03.831001  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem (1123 bytes)
	I0916 23:57:03.831021  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem (1675 bytes)
	I0916 23:57:03.831058  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem (1708 bytes)
	I0916 23:57:03.831081  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem -> /usr/share/ca-certificates/752707.pem
	I0916 23:57:03.831094  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> /usr/share/ca-certificates/7527072.pem
	I0916 23:57:03.831107  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:03.831156  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:57:03.847964  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0916 23:57:03.934599  804231 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0916 23:57:03.938331  804231 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0916 23:57:03.950286  804231 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0916 23:57:03.953541  804231 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0916 23:57:03.965169  804231 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0916 23:57:03.968351  804231 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0916 23:57:03.979814  804231 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0916 23:57:03.982969  804231 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0916 23:57:03.993972  804231 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0916 23:57:03.997171  804231 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0916 23:57:04.008607  804231 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0916 23:57:04.011687  804231 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0916 23:57:04.023019  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 23:57:04.046509  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 23:57:04.069781  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 23:57:04.092702  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0916 23:57:04.114933  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0916 23:57:04.137173  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1671 bytes)
	I0916 23:57:04.159280  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 23:57:04.181367  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 23:57:04.203980  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem --> /usr/share/ca-certificates/752707.pem (1338 bytes)
	I0916 23:57:04.230248  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem --> /usr/share/ca-certificates/7527072.pem (1708 bytes)
	I0916 23:57:04.253628  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 23:57:04.276223  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0916 23:57:04.293552  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0916 23:57:04.309978  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0916 23:57:04.326237  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0916 23:57:04.342704  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0916 23:57:04.359099  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0916 23:57:04.375242  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0916 23:57:04.391611  804231 ssh_runner.go:195] Run: openssl version
	I0916 23:57:04.396637  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/752707.pem && ln -fs /usr/share/ca-certificates/752707.pem /etc/ssl/certs/752707.pem"
	I0916 23:57:04.405389  804231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/752707.pem
	I0916 23:57:04.408604  804231 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 23:54 /usr/share/ca-certificates/752707.pem
	I0916 23:57:04.408651  804231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/752707.pem
	I0916 23:57:04.414862  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/752707.pem /etc/ssl/certs/51391683.0"
	I0916 23:57:04.423583  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7527072.pem && ln -fs /usr/share/ca-certificates/7527072.pem /etc/ssl/certs/7527072.pem"
	I0916 23:57:04.432421  804231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7527072.pem
	I0916 23:57:04.435706  804231 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 23:54 /usr/share/ca-certificates/7527072.pem
	I0916 23:57:04.435752  804231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7527072.pem
	I0916 23:57:04.441863  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7527072.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 23:57:04.450595  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 23:57:04.459588  804231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:04.462866  804231 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:04.462907  804231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:04.469279  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 23:57:04.478135  804231 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 23:57:04.481236  804231 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 23:57:04.481288  804231 kubeadm.go:926] updating node {m02 192.168.49.3 8443 v1.34.0 containerd true true} ...
	I0916 23:57:04.481383  804231 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-472903-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
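
The kubelet unit printed above (kubeadm.go:938) is rendered per node, substituting the Kubernetes version, the hostname override and the node IP. A small text/template sketch of that substitution, with hypothetical field names (the real template lives in minikube's bootstrapper package):

    package main

    import (
    	"os"
    	"text/template"
    )

    const kubeletUnit = `[Unit]
    Wants=containerd.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
    	// Values for the m02 node, taken from the log above.
    	data := struct {
    		KubernetesVersion, NodeName, NodeIP string
    	}{"v1.34.0", "ha-472903-m02", "192.168.49.3"}

    	tmpl := template.Must(template.New("kubelet").Parse(kubeletUnit))
    	if err := tmpl.Execute(os.Stdout, data); err != nil {
    		panic(err)
    	}
    }
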
	I0916 23:57:04.481425  804231 kube-vip.go:115] generating kube-vip config ...
	I0916 23:57:04.481462  804231 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0916 23:57:04.492937  804231 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0916 23:57:04.492999  804231 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
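
Before writing the manifest above, kube-vip.go:115-163 probes for ip_vs kernel modules; because `lsmod | grep ip_vs` returned nothing, control-plane load-balancing is skipped and the generated pod only advertises the VIP 192.168.49.254 over ARP on eth0. A minimal Go sketch of that probe (hypothetical helper name, not minikube's actual function):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // ipvsAvailable reports whether any ip_vs kernel module is loaded, the same
    // precondition the log checks with `sudo sh -c "lsmod | grep ip_vs"`.
    func ipvsAvailable() bool {
    	// grep exits non-zero when nothing matches, which Run() surfaces as an
    	// error, so a nil error means at least one ip_vs module is loaded.
    	err := exec.Command("sh", "-c", "lsmod | grep -q ip_vs").Run()
    	return err == nil
    }

    func main() {
    	if ipvsAvailable() {
    		fmt.Println("ip_vs present: control-plane load-balancing could be enabled")
    	} else {
    		fmt.Println("ip_vs missing: fall back to ARP-only VIP, as in the log above")
    	}
    }
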
	I0916 23:57:04.493041  804231 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0916 23:57:04.501084  804231 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 23:57:04.501123  804231 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0916 23:57:04.509217  804231 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0916 23:57:04.525587  804231 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 23:57:04.544042  804231 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0916 23:57:04.561542  804231 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0916 23:57:04.564725  804231 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 23:57:04.574819  804231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:04.638378  804231 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 23:57:04.659569  804231 host.go:66] Checking if "ha-472903" exists ...
	I0916 23:57:04.659878  804231 start.go:317] joinCluster: &{Name:ha-472903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 23:57:04.659986  804231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0916 23:57:04.660033  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:57:04.678136  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0916 23:57:04.817608  804231 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 23:57:04.817663  804231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 79akng.11lpa8n1ba4yh5m1 --discovery-token-ca-cert-hash sha256:52c78ec9ad9a2dc0941e43ce337b864c76ea573e452bc75ed737e69ad76deac1 --ignore-preflight-errors=all --cri-socket unix:///run/containerd/containerd.sock --node-name=ha-472903-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443"
	I0916 23:57:23.327384  804231 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 79akng.11lpa8n1ba4yh5m1 --discovery-token-ca-cert-hash sha256:52c78ec9ad9a2dc0941e43ce337b864c76ea573e452bc75ed737e69ad76deac1 --ignore-preflight-errors=all --cri-socket unix:///run/containerd/containerd.sock --node-name=ha-472903-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443": (18.509693377s)
	I0916 23:57:23.327447  804231 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0916 23:57:23.521334  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-472903-m02 minikube.k8s.io/updated_at=2025_09_16T23_57_23_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a minikube.k8s.io/name=ha-472903 minikube.k8s.io/primary=false
	I0916 23:57:23.592991  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-472903-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0916 23:57:23.664899  804231 start.go:319] duration metric: took 19.005017018s to joinCluster
	I0916 23:57:23.664975  804231 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 23:57:23.665223  804231 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0916 23:57:23.665877  804231 out.go:179] * Verifying Kubernetes components...
	I0916 23:57:23.666680  804231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:23.766393  804231 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 23:57:23.779164  804231 kapi.go:59] client config for ha-472903: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key", CAFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0916 23:57:23.779228  804231 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0916 23:57:23.779511  804231 node_ready.go:35] waiting up to 6m0s for node "ha-472903-m02" to be "Ready" ...
	I0916 23:57:24.283593  804231 node_ready.go:49] node "ha-472903-m02" is "Ready"
	I0916 23:57:24.283628  804231 node_ready.go:38] duration metric: took 504.097895ms for node "ha-472903-m02" to be "Ready" ...
	I0916 23:57:24.283648  804231 api_server.go:52] waiting for apiserver process to appear ...
	I0916 23:57:24.283699  804231 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 23:57:24.295735  804231 api_server.go:72] duration metric: took 630.723924ms to wait for apiserver process to appear ...
	I0916 23:57:24.295758  804231 api_server.go:88] waiting for apiserver healthz status ...
	I0916 23:57:24.295774  804231 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0916 23:57:24.299650  804231 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0916 23:57:24.300537  804231 api_server.go:141] control plane version: v1.34.0
	I0916 23:57:24.300558  804231 api_server.go:131] duration metric: took 4.795429ms to wait for apiserver health ...
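
The healthz wait above polls https://192.168.49.2:8443/healthz until it answers 200 "ok". A minimal Go polling sketch, with TLS verification disabled only to keep the example short (the real client trusts the minikubeCA certificate):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"os"
    	"time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		// Verification skipped purely for brevity in this sketch.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver not healthy after %s", timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.49.2:8443/healthz", time.Minute); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("healthz returned ok")
    }
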
	I0916 23:57:24.300566  804231 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 23:57:24.304572  804231 system_pods.go:59] 19 kube-system pods found
	I0916 23:57:24.304598  804231 system_pods.go:61] "coredns-66bc5c9577-c94hz" [774f1c0f-9759-44c2-957d-5a97670f951b] Running
	I0916 23:57:24.304604  804231 system_pods.go:61] "coredns-66bc5c9577-qn8m7" [1c58205e-e865-42fc-8282-23e3d779ee97] Running
	I0916 23:57:24.304608  804231 system_pods.go:61] "etcd-ha-472903" [e333577b-838c-41c5-ba86-ce3d7de57077] Running
	I0916 23:57:24.304611  804231 system_pods.go:61] "etcd-ha-472903-m02" [8a478117-c53d-4621-aa09-be3c16d386c0] Pending
	I0916 23:57:24.304615  804231 system_pods.go:61] "kindnet-lh7dv" [1da43ca7-9af7-4573-9cdc-fd21b098ca2c] Running
	I0916 23:57:24.304621  804231 system_pods.go:61] "kindnet-mwf8l" [8c9533d3-defe-487b-a9b4-0502fb8f2d2a] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-mwf8l": pod kindnet-mwf8l is being deleted, cannot be assigned to a host)
	I0916 23:57:24.304628  804231 system_pods.go:61] "kindnet-q7c7s" [85db5b30-8ace-4bb0-8886-32b9ca032b2b] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-q7c7s": pod kindnet-q7c7s is already assigned to node "ha-472903-m02")
	I0916 23:57:24.304639  804231 system_pods.go:61] "kube-apiserver-ha-472903" [e2844751-3962-4753-8b63-79c124dd5fd7] Running
	I0916 23:57:24.304643  804231 system_pods.go:61] "kube-apiserver-ha-472903-m02" [6675419c-7693-4970-b73c-8415bcda1684] Pending
	I0916 23:57:24.304646  804231 system_pods.go:61] "kube-controller-manager-ha-472903" [be5cfd0b-a3b9-44cf-8cde-74e9eb89c738] Running
	I0916 23:57:24.304650  804231 system_pods.go:61] "kube-controller-manager-ha-472903-m02" [54f6e7e0-0a78-4651-b24f-f902c6bf7efb] Pending
	I0916 23:57:24.304657  804231 system_pods.go:61] "kube-proxy-58lkb" [32fed88c-ce9e-4536-8e96-04ab5b4f5d42] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-58lkb": pod kube-proxy-58lkb is already assigned to node "ha-472903-m02")
	I0916 23:57:24.304662  804231 system_pods.go:61] "kube-proxy-d4m8f" [d4a70eec-48a7-4ea6-871a-1b5ed2beca9a] Running
	I0916 23:57:24.304666  804231 system_pods.go:61] "kube-proxy-mf26q" [34502b32-75c1-4078-abd2-4e4d625252d8] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-mf26q": pod kube-proxy-mf26q is already assigned to node "ha-472903-m02")
	I0916 23:57:24.304670  804231 system_pods.go:61] "kube-scheduler-ha-472903" [e949de65-b218-45cb-abe7-79b704aae473] Running
	I0916 23:57:24.304677  804231 system_pods.go:61] "kube-scheduler-ha-472903-m02" [08b5a4f0-3aa6-4a82-b171-afc1eafcd4c2] Pending
	I0916 23:57:24.304679  804231 system_pods.go:61] "kube-vip-ha-472903" [ccdab212-cf0c-4bf0-958b-173e1008f7bc] Running
	I0916 23:57:24.304682  804231 system_pods.go:61] "kube-vip-ha-472903-m02" [748f096f-bec6-4de8-92f0-128db827bdd6] Pending
	I0916 23:57:24.304687  804231 system_pods.go:61] "storage-provisioner" [ac7f283e-4d28-46cf-a519-bd227237d5e7] Running
	I0916 23:57:24.304694  804231 system_pods.go:74] duration metric: took 4.122792ms to wait for pod list to return data ...
	I0916 23:57:24.304700  804231 default_sa.go:34] waiting for default service account to be created ...
	I0916 23:57:24.307165  804231 default_sa.go:45] found service account: "default"
	I0916 23:57:24.307183  804231 default_sa.go:55] duration metric: took 2.474442ms for default service account to be created ...
	I0916 23:57:24.307190  804231 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 23:57:24.310491  804231 system_pods.go:86] 19 kube-system pods found
	I0916 23:57:24.310512  804231 system_pods.go:89] "coredns-66bc5c9577-c94hz" [774f1c0f-9759-44c2-957d-5a97670f951b] Running
	I0916 23:57:24.310517  804231 system_pods.go:89] "coredns-66bc5c9577-qn8m7" [1c58205e-e865-42fc-8282-23e3d779ee97] Running
	I0916 23:57:24.310520  804231 system_pods.go:89] "etcd-ha-472903" [e333577b-838c-41c5-ba86-ce3d7de57077] Running
	I0916 23:57:24.310524  804231 system_pods.go:89] "etcd-ha-472903-m02" [8a478117-c53d-4621-aa09-be3c16d386c0] Pending
	I0916 23:57:24.310527  804231 system_pods.go:89] "kindnet-lh7dv" [1da43ca7-9af7-4573-9cdc-fd21b098ca2c] Running
	I0916 23:57:24.310532  804231 system_pods.go:89] "kindnet-mwf8l" [8c9533d3-defe-487b-a9b4-0502fb8f2d2a] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-mwf8l": pod kindnet-mwf8l is being deleted, cannot be assigned to a host)
	I0916 23:57:24.310556  804231 system_pods.go:89] "kindnet-q7c7s" [85db5b30-8ace-4bb0-8886-32b9ca032b2b] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-q7c7s": pod kindnet-q7c7s is already assigned to node "ha-472903-m02")
	I0916 23:57:24.310566  804231 system_pods.go:89] "kube-apiserver-ha-472903" [e2844751-3962-4753-8b63-79c124dd5fd7] Running
	I0916 23:57:24.310571  804231 system_pods.go:89] "kube-apiserver-ha-472903-m02" [6675419c-7693-4970-b73c-8415bcda1684] Pending
	I0916 23:57:24.310576  804231 system_pods.go:89] "kube-controller-manager-ha-472903" [be5cfd0b-a3b9-44cf-8cde-74e9eb89c738] Running
	I0916 23:57:24.310580  804231 system_pods.go:89] "kube-controller-manager-ha-472903-m02" [54f6e7e0-0a78-4651-b24f-f902c6bf7efb] Pending
	I0916 23:57:24.310588  804231 system_pods.go:89] "kube-proxy-58lkb" [32fed88c-ce9e-4536-8e96-04ab5b4f5d42] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-58lkb": pod kube-proxy-58lkb is already assigned to node "ha-472903-m02")
	I0916 23:57:24.310591  804231 system_pods.go:89] "kube-proxy-d4m8f" [d4a70eec-48a7-4ea6-871a-1b5ed2beca9a] Running
	I0916 23:57:24.310596  804231 system_pods.go:89] "kube-proxy-mf26q" [34502b32-75c1-4078-abd2-4e4d625252d8] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-mf26q": pod kube-proxy-mf26q is already assigned to node "ha-472903-m02")
	I0916 23:57:24.310600  804231 system_pods.go:89] "kube-scheduler-ha-472903" [e949de65-b218-45cb-abe7-79b704aae473] Running
	I0916 23:57:24.310603  804231 system_pods.go:89] "kube-scheduler-ha-472903-m02" [08b5a4f0-3aa6-4a82-b171-afc1eafcd4c2] Pending
	I0916 23:57:24.310608  804231 system_pods.go:89] "kube-vip-ha-472903" [ccdab212-cf0c-4bf0-958b-173e1008f7bc] Running
	I0916 23:57:24.310611  804231 system_pods.go:89] "kube-vip-ha-472903-m02" [748f096f-bec6-4de8-92f0-128db827bdd6] Pending
	I0916 23:57:24.310614  804231 system_pods.go:89] "storage-provisioner" [ac7f283e-4d28-46cf-a519-bd227237d5e7] Running
	I0916 23:57:24.310621  804231 system_pods.go:126] duration metric: took 3.426124ms to wait for k8s-apps to be running ...
	I0916 23:57:24.310629  804231 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 23:57:24.310666  804231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 23:57:24.322152  804231 system_svc.go:56] duration metric: took 11.515834ms WaitForService to wait for kubelet
	I0916 23:57:24.322176  804231 kubeadm.go:578] duration metric: took 657.167547ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 23:57:24.322199  804231 node_conditions.go:102] verifying NodePressure condition ...
	I0916 23:57:24.327707  804231 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 23:57:24.327734  804231 node_conditions.go:123] node cpu capacity is 8
	I0916 23:57:24.327748  804231 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 23:57:24.327754  804231 node_conditions.go:123] node cpu capacity is 8
	I0916 23:57:24.327759  804231 node_conditions.go:105] duration metric: took 5.554046ms to run NodePressure ...
	I0916 23:57:24.327772  804231 start.go:241] waiting for startup goroutines ...
	I0916 23:57:24.327803  804231 start.go:255] writing updated cluster config ...
	I0916 23:57:24.329316  804231 out.go:203] 
	I0916 23:57:24.330356  804231 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0916 23:57:24.330485  804231 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0916 23:57:24.331956  804231 out.go:179] * Starting "ha-472903-m03" control-plane node in "ha-472903" cluster
	I0916 23:57:24.332973  804231 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0916 23:57:24.333962  804231 out.go:179] * Pulling base image v0.0.48 ...
	I0916 23:57:24.334852  804231 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0916 23:57:24.334875  804231 cache.go:58] Caching tarball of preloaded images
	I0916 23:57:24.334942  804231 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0916 23:57:24.334986  804231 preload.go:172] Found /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 23:57:24.334997  804231 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0916 23:57:24.335117  804231 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0916 23:57:24.357217  804231 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0916 23:57:24.357233  804231 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0916 23:57:24.357242  804231 cache.go:232] Successfully downloaded all kic artifacts
	I0916 23:57:24.357267  804231 start.go:360] acquireMachinesLock for ha-472903-m03: {Name:mk61000bb8e4699ca3310a7fc257e30a156b69de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 23:57:24.357354  804231 start.go:364] duration metric: took 71.354µs to acquireMachinesLock for "ha-472903-m03"
	I0916 23:57:24.357375  804231 start.go:93] Provisioning new machine with config: &{Name:ha-472903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 23:57:24.357498  804231 start.go:125] createHost starting for "m03" (driver="docker")
	I0916 23:57:24.358917  804231 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0916 23:57:24.358994  804231 start.go:159] libmachine.API.Create for "ha-472903" (driver="docker")
	I0916 23:57:24.359023  804231 client.go:168] LocalClient.Create starting
	I0916 23:57:24.359071  804231 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem
	I0916 23:57:24.359103  804231 main.go:141] libmachine: Decoding PEM data...
	I0916 23:57:24.359116  804231 main.go:141] libmachine: Parsing certificate...
	I0916 23:57:24.359164  804231 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem
	I0916 23:57:24.359182  804231 main.go:141] libmachine: Decoding PEM data...
	I0916 23:57:24.359192  804231 main.go:141] libmachine: Parsing certificate...
	I0916 23:57:24.359366  804231 cli_runner.go:164] Run: docker network inspect ha-472903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:57:24.375654  804231 network_create.go:77] Found existing network {name:ha-472903 subnet:0xc001b33bf0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0916 23:57:24.375684  804231 kic.go:121] calculated static IP "192.168.49.4" for the "ha-472903-m03" container
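
The static IP above (kic.go:121) is derived from the existing ha-472903 network: with gateway 192.168.49.1 and nodes .2 and .3 already allocated, m03 gets 192.168.49.4. A minimal Go sketch of that offset calculation, assuming a hypothetical nthNodeIP helper and IPv4 only:

    package main

    import (
    	"fmt"
    	"net"
    )

    // nthNodeIP returns gateway+offset inside the cluster subnet, e.g. offset 3
    // from 192.168.49.1 yields 192.168.49.4 for the m03 node in the log.
    func nthNodeIP(gateway string, offset int) (net.IP, error) {
    	ip := net.ParseIP(gateway).To4()
    	if ip == nil {
    		return nil, fmt.Errorf("not an IPv4 address: %s", gateway)
    	}
    	out := make(net.IP, len(ip))
    	copy(out, ip)
    	out[3] += byte(offset) // fine for small clusters; real code would carry/validate
    	return out, nil
    }

    func main() {
    	ip, err := nthNodeIP("192.168.49.1", 3)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(ip) // 192.168.49.4
    }
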
	I0916 23:57:24.375740  804231 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 23:57:24.392165  804231 cli_runner.go:164] Run: docker volume create ha-472903-m03 --label name.minikube.sigs.k8s.io=ha-472903-m03 --label created_by.minikube.sigs.k8s.io=true
	I0916 23:57:24.408273  804231 oci.go:103] Successfully created a docker volume ha-472903-m03
	I0916 23:57:24.408342  804231 cli_runner.go:164] Run: docker run --rm --name ha-472903-m03-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-472903-m03 --entrypoint /usr/bin/test -v ha-472903-m03:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0916 23:57:24.957699  804231 oci.go:107] Successfully prepared a docker volume ha-472903-m03
	I0916 23:57:24.957748  804231 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0916 23:57:24.957783  804231 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 23:57:24.957856  804231 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-472903-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 23:57:29.095091  804231 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-472903-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.13717471s)
	I0916 23:57:29.095123  804231 kic.go:203] duration metric: took 4.137337977s to extract preloaded images to volume ...
	W0916 23:57:29.095214  804231 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0916 23:57:29.095253  804231 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0916 23:57:29.095300  804231 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 23:57:29.145859  804231 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-472903-m03 --name ha-472903-m03 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-472903-m03 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-472903-m03 --network ha-472903 --ip 192.168.49.4 --volume ha-472903-m03:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0916 23:57:29.392873  804231 cli_runner.go:164] Run: docker container inspect ha-472903-m03 --format={{.State.Running}}
	I0916 23:57:29.412389  804231 cli_runner.go:164] Run: docker container inspect ha-472903-m03 --format={{.State.Status}}
	I0916 23:57:29.430593  804231 cli_runner.go:164] Run: docker exec ha-472903-m03 stat /var/lib/dpkg/alternatives/iptables
	I0916 23:57:29.476672  804231 oci.go:144] the created container "ha-472903-m03" has a running status.
	I0916 23:57:29.476707  804231 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa...
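
The key creation above (kic.go:225) generates an RSA keypair for the new container; the public half is then installed into /home/docker/.ssh/authorized_keys. A minimal Go sketch of producing such a keypair with golang.org/x/crypto/ssh (output file names are placeholders, not minikube's machine store layout):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"encoding/pem"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}

    	// PEM-encoded private key, the analogue of the machines/.../id_rsa file.
    	privPEM := pem.EncodeToMemory(&pem.Block{
    		Type:  "RSA PRIVATE KEY",
    		Bytes: x509.MarshalPKCS1PrivateKey(key),
    	})
    	if err := os.WriteFile("id_rsa", privPEM, 0600); err != nil {
    		panic(err)
    	}

    	// authorized_keys line, the analogue of id_rsa.pub pushed into the node.
    	pub, err := ssh.NewPublicKey(&key.PublicKey)
    	if err != nil {
    		panic(err)
    	}
    	if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0644); err != nil {
    		panic(err)
    	}
    }
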
	I0916 23:57:29.927926  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0916 23:57:29.927968  804231 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 23:57:29.954518  804231 cli_runner.go:164] Run: docker container inspect ha-472903-m03 --format={{.State.Status}}
	I0916 23:57:29.975503  804231 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 23:57:29.975522  804231 kic_runner.go:114] Args: [docker exec --privileged ha-472903-m03 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 23:57:30.023965  804231 cli_runner.go:164] Run: docker container inspect ha-472903-m03 --format={{.State.Status}}
	I0916 23:57:30.040966  804231 machine.go:93] provisionDockerMachine start ...
	I0916 23:57:30.041051  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0916 23:57:30.058157  804231 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:30.058388  804231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33554 <nil> <nil>}
	I0916 23:57:30.058400  804231 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 23:57:30.190964  804231 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-472903-m03
	
	I0916 23:57:30.190995  804231 ubuntu.go:182] provisioning hostname "ha-472903-m03"
	I0916 23:57:30.191059  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0916 23:57:30.208862  804231 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:30.209123  804231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33554 <nil> <nil>}
	I0916 23:57:30.209144  804231 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-472903-m03 && echo "ha-472903-m03" | sudo tee /etc/hostname
	I0916 23:57:30.354363  804231 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-472903-m03
	
	I0916 23:57:30.354466  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0916 23:57:30.372285  804231 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:30.372570  804231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33554 <nil> <nil>}
	I0916 23:57:30.372590  804231 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-472903-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-472903-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-472903-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 23:57:30.504861  804231 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 23:57:30.504898  804231 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-749120/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-749120/.minikube}
	I0916 23:57:30.504920  804231 ubuntu.go:190] setting up certificates
	I0916 23:57:30.504933  804231 provision.go:84] configureAuth start
	I0916 23:57:30.504996  804231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m03
	I0916 23:57:30.522218  804231 provision.go:143] copyHostCerts
	I0916 23:57:30.522259  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem
	I0916 23:57:30.522297  804231 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem, removing ...
	I0916 23:57:30.522306  804231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem
	I0916 23:57:30.522369  804231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem (1078 bytes)
	I0916 23:57:30.522483  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem
	I0916 23:57:30.522506  804231 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem, removing ...
	I0916 23:57:30.522510  804231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem
	I0916 23:57:30.522547  804231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem (1123 bytes)
	I0916 23:57:30.522650  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem
	I0916 23:57:30.522673  804231 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem, removing ...
	I0916 23:57:30.522678  804231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem
	I0916 23:57:30.522703  804231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem (1675 bytes)
	I0916 23:57:30.522769  804231 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem org=jenkins.ha-472903-m03 san=[127.0.0.1 192.168.49.4 ha-472903-m03 localhost minikube]
	I0916 23:57:30.644066  804231 provision.go:177] copyRemoteCerts
	I0916 23:57:30.644118  804231 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 23:57:30.644153  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0916 23:57:30.661612  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33554 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa Username:docker}
	I0916 23:57:30.757452  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 23:57:30.757504  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 23:57:30.782942  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 23:57:30.782994  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 23:57:30.806508  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 23:57:30.806562  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 23:57:30.829686  804231 provision.go:87] duration metric: took 324.735799ms to configureAuth
	I0916 23:57:30.829709  804231 ubuntu.go:206] setting minikube options for container-runtime
	I0916 23:57:30.829902  804231 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0916 23:57:30.829916  804231 machine.go:96] duration metric: took 788.930334ms to provisionDockerMachine
	I0916 23:57:30.829925  804231 client.go:171] duration metric: took 6.470893656s to LocalClient.Create
	I0916 23:57:30.829958  804231 start.go:167] duration metric: took 6.470963089s to libmachine.API.Create "ha-472903"
	I0916 23:57:30.829971  804231 start.go:293] postStartSetup for "ha-472903-m03" (driver="docker")
	I0916 23:57:30.829982  804231 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 23:57:30.830042  804231 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 23:57:30.830092  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0916 23:57:30.847215  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33554 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa Username:docker}
	I0916 23:57:30.945849  804231 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 23:57:30.949055  804231 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 23:57:30.949086  804231 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 23:57:30.949098  804231 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 23:57:30.949107  804231 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0916 23:57:30.949120  804231 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-749120/.minikube/addons for local assets ...
	I0916 23:57:30.949174  804231 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-749120/.minikube/files for local assets ...
	I0916 23:57:30.949274  804231 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> 7527072.pem in /etc/ssl/certs
	I0916 23:57:30.949286  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> /etc/ssl/certs/7527072.pem
	I0916 23:57:30.949392  804231 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 23:57:30.957998  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem --> /etc/ssl/certs/7527072.pem (1708 bytes)
	I0916 23:57:30.983779  804231 start.go:296] duration metric: took 153.794843ms for postStartSetup
	I0916 23:57:30.984109  804231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m03
	I0916 23:57:31.001367  804231 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0916 23:57:31.001618  804231 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 23:57:31.001659  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0916 23:57:31.019034  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33554 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa Username:docker}
	I0916 23:57:31.110814  804231 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 23:57:31.115046  804231 start.go:128] duration metric: took 6.757532739s to createHost
	I0916 23:57:31.115072  804231 start.go:83] releasing machines lock for "ha-472903-m03", held for 6.757707303s
	I0916 23:57:31.115154  804231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m03
	I0916 23:57:31.133371  804231 out.go:179] * Found network options:
	I0916 23:57:31.134481  804231 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0916 23:57:31.135570  804231 proxy.go:120] fail to check proxy env: Error ip not in block
	W0916 23:57:31.135598  804231 proxy.go:120] fail to check proxy env: Error ip not in block
	W0916 23:57:31.135626  804231 proxy.go:120] fail to check proxy env: Error ip not in block
	W0916 23:57:31.135644  804231 proxy.go:120] fail to check proxy env: Error ip not in block
	I0916 23:57:31.135714  804231 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 23:57:31.135763  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0916 23:57:31.135778  804231 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 23:57:31.135845  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0916 23:57:31.152320  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33554 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa Username:docker}
	I0916 23:57:31.153909  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33554 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa Username:docker}
	I0916 23:57:31.320495  804231 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 23:57:31.348141  804231 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 23:57:31.348214  804231 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 23:57:31.373693  804231 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0916 23:57:31.373720  804231 start.go:495] detecting cgroup driver to use...
	I0916 23:57:31.373748  804231 detect.go:190] detected "systemd" cgroup driver on host os
	I0916 23:57:31.373802  804231 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 23:57:31.385560  804231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 23:57:31.396165  804231 docker.go:218] disabling cri-docker service (if available) ...
	I0916 23:57:31.396214  804231 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 23:57:31.409119  804231 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 23:57:31.422244  804231 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 23:57:31.489491  804231 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 23:57:31.557098  804231 docker.go:234] disabling docker service ...
	I0916 23:57:31.557149  804231 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 23:57:31.574601  804231 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 23:57:31.585773  804231 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 23:57:31.649988  804231 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 23:57:31.717070  804231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 23:57:31.727904  804231 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 23:57:31.743685  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0916 23:57:31.755962  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 23:57:31.766072  804231 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0916 23:57:31.766138  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0916 23:57:31.775522  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 23:57:31.785914  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 23:57:31.795134  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 23:57:31.804565  804231 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 23:57:31.813319  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 23:57:31.822500  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 23:57:31.831597  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 23:57:31.840887  804231 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 23:57:31.848842  804231 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 23:57:31.857026  804231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:31.920521  804231 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0916 23:57:32.022746  804231 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0916 23:57:32.022804  804231 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0916 23:57:32.026838  804231 start.go:563] Will wait 60s for crictl version
	I0916 23:57:32.026888  804231 ssh_runner.go:195] Run: which crictl
	I0916 23:57:32.030295  804231 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 23:57:32.064100  804231 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0916 23:57:32.064158  804231 ssh_runner.go:195] Run: containerd --version
	I0916 23:57:32.088276  804231 ssh_runner.go:195] Run: containerd --version
	I0916 23:57:32.114182  804231 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0916 23:57:32.115194  804231 out.go:179]   - env NO_PROXY=192.168.49.2
	I0916 23:57:32.116236  804231 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0916 23:57:32.117151  804231 cli_runner.go:164] Run: docker network inspect ha-472903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:57:32.133290  804231 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 23:57:32.136901  804231 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 23:57:32.147860  804231 mustload.go:65] Loading cluster: ha-472903
	I0916 23:57:32.148060  804231 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0916 23:57:32.148275  804231 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0916 23:57:32.164278  804231 host.go:66] Checking if "ha-472903" exists ...
	I0916 23:57:32.164570  804231 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903 for IP: 192.168.49.4
	I0916 23:57:32.164584  804231 certs.go:194] generating shared ca certs ...
	I0916 23:57:32.164601  804231 certs.go:226] acquiring lock for ca certs: {Name:mk87d179b4a631193bd9c86db8034ccf19400cde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:32.164751  804231 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key
	I0916 23:57:32.164800  804231 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key
	I0916 23:57:32.164814  804231 certs.go:256] generating profile certs ...
	I0916 23:57:32.164911  804231 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key
	I0916 23:57:32.164940  804231 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.14b885b8
	I0916 23:57:32.164958  804231 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.14b885b8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I0916 23:57:32.342596  804231 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.14b885b8 ...
	I0916 23:57:32.342623  804231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.14b885b8: {Name:mk455c3f0ae4544ddcdf75c25cbd1b87a24e61a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:32.342787  804231 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.14b885b8 ...
	I0916 23:57:32.342799  804231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.14b885b8: {Name:mkbd551bf9ae23c129f7e263550d20b4aac5d095 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:32.342871  804231 certs.go:381] copying /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.14b885b8 -> /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt
	I0916 23:57:32.343007  804231 certs.go:385] copying /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.14b885b8 -> /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key
	I0916 23:57:32.343136  804231 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key
	I0916 23:57:32.343152  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 23:57:32.343165  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 23:57:32.343178  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 23:57:32.343191  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 23:57:32.343204  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 23:57:32.343214  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 23:57:32.343229  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 23:57:32.343247  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 23:57:32.343299  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem (1338 bytes)
	W0916 23:57:32.343327  804231 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707_empty.pem, impossibly tiny 0 bytes
	I0916 23:57:32.343337  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem (1675 bytes)
	I0916 23:57:32.343357  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem (1078 bytes)
	I0916 23:57:32.343379  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem (1123 bytes)
	I0916 23:57:32.343400  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem (1675 bytes)
	I0916 23:57:32.343464  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem (1708 bytes)
	I0916 23:57:32.343501  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:32.343521  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem -> /usr/share/ca-certificates/752707.pem
	I0916 23:57:32.343534  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> /usr/share/ca-certificates/7527072.pem
	I0916 23:57:32.343588  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:57:32.360782  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0916 23:57:32.447595  804231 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0916 23:57:32.451217  804231 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0916 23:57:32.464033  804231 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0916 23:57:32.467273  804231 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0916 23:57:32.478860  804231 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0916 23:57:32.482180  804231 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0916 23:57:32.493717  804231 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0916 23:57:32.496761  804231 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0916 23:57:32.507849  804231 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0916 23:57:32.511054  804231 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0916 23:57:32.523733  804231 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0916 23:57:32.526954  804231 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0916 23:57:32.538314  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 23:57:32.561866  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 23:57:32.585900  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 23:57:32.610048  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0916 23:57:32.634812  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0916 23:57:32.659163  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 23:57:32.682157  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 23:57:32.704663  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 23:57:32.727856  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 23:57:32.752740  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem --> /usr/share/ca-certificates/752707.pem (1338 bytes)
	I0916 23:57:32.775900  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem --> /usr/share/ca-certificates/7527072.pem (1708 bytes)
	I0916 23:57:32.798720  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0916 23:57:32.815542  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0916 23:57:32.832241  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0916 23:57:32.848964  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0916 23:57:32.865780  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0916 23:57:32.882614  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0916 23:57:32.899296  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0916 23:57:32.916516  804231 ssh_runner.go:195] Run: openssl version
	I0916 23:57:32.921611  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7527072.pem && ln -fs /usr/share/ca-certificates/7527072.pem /etc/ssl/certs/7527072.pem"
	I0916 23:57:32.930917  804231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7527072.pem
	I0916 23:57:32.934241  804231 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 23:54 /usr/share/ca-certificates/7527072.pem
	I0916 23:57:32.934283  804231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7527072.pem
	I0916 23:57:32.941354  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7527072.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 23:57:32.950335  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 23:57:32.959292  804231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:32.962576  804231 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:32.962623  804231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:32.968989  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 23:57:32.978331  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/752707.pem && ln -fs /usr/share/ca-certificates/752707.pem /etc/ssl/certs/752707.pem"
	I0916 23:57:32.987188  804231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/752707.pem
	I0916 23:57:32.990463  804231 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 23:54 /usr/share/ca-certificates/752707.pem
	I0916 23:57:32.990497  804231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/752707.pem
	I0916 23:57:32.996813  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/752707.pem /etc/ssl/certs/51391683.0"
	I0916 23:57:33.005924  804231 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 23:57:33.009122  804231 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 23:57:33.009183  804231 kubeadm.go:926] updating node {m03 192.168.49.4 8443 v1.34.0 containerd true true} ...
	I0916 23:57:33.009266  804231 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-472903-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 23:57:33.009291  804231 kube-vip.go:115] generating kube-vip config ...
	I0916 23:57:33.009319  804231 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0916 23:57:33.021189  804231 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0916 23:57:33.021246  804231 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0916 23:57:33.021293  804231 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0916 23:57:33.029533  804231 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 23:57:33.029576  804231 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0916 23:57:33.038861  804231 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0916 23:57:33.056092  804231 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 23:57:33.075506  804231 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0916 23:57:33.093918  804231 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0916 23:57:33.097171  804231 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 23:57:33.107668  804231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:33.167706  804231 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 23:57:33.188453  804231 host.go:66] Checking if "ha-472903" exists ...
	I0916 23:57:33.188671  804231 start.go:317] joinCluster: &{Name:ha-472903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 23:57:33.188781  804231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0916 23:57:33.188819  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:57:33.210165  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0916 23:57:33.351871  804231 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 23:57:33.351930  804231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token uj456s.97hymgg3kmg6owuv --discovery-token-ca-cert-hash sha256:52c78ec9ad9a2dc0941e43ce337b864c76ea573e452bc75ed737e69ad76deac1 --ignore-preflight-errors=all --cri-socket unix:///run/containerd/containerd.sock --node-name=ha-472903-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443"
	I0916 23:57:51.860237  804231 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token uj456s.97hymgg3kmg6owuv --discovery-token-ca-cert-hash sha256:52c78ec9ad9a2dc0941e43ce337b864c76ea573e452bc75ed737e69ad76deac1 --ignore-preflight-errors=all --cri-socket unix:///run/containerd/containerd.sock --node-name=ha-472903-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443": (18.508258539s)
	I0916 23:57:51.860308  804231 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0916 23:57:52.080986  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-472903-m03 minikube.k8s.io/updated_at=2025_09_16T23_57_52_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a minikube.k8s.io/name=ha-472903 minikube.k8s.io/primary=false
	I0916 23:57:52.152525  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-472903-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0916 23:57:52.226560  804231 start.go:319] duration metric: took 19.037884553s to joinCluster
	I0916 23:57:52.226624  804231 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 23:57:52.226912  804231 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0916 23:57:52.227744  804231 out.go:179] * Verifying Kubernetes components...
	I0916 23:57:52.228620  804231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:52.334638  804231 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 23:57:52.349036  804231 kapi.go:59] client config for ha-472903: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key", CAFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0916 23:57:52.349105  804231 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0916 23:57:52.349317  804231 node_ready.go:35] waiting up to 6m0s for node "ha-472903-m03" to be "Ready" ...
	I0916 23:57:54.352346  804231 node_ready.go:49] node "ha-472903-m03" is "Ready"
	I0916 23:57:54.352374  804231 node_ready.go:38] duration metric: took 2.003044453s for node "ha-472903-m03" to be "Ready" ...
	I0916 23:57:54.352389  804231 api_server.go:52] waiting for apiserver process to appear ...
	I0916 23:57:54.352476  804231 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 23:57:54.365259  804231 api_server.go:72] duration metric: took 2.138606454s to wait for apiserver process to appear ...
	I0916 23:57:54.365280  804231 api_server.go:88] waiting for apiserver healthz status ...
	I0916 23:57:54.365298  804231 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0916 23:57:54.370985  804231 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0916 23:57:54.371831  804231 api_server.go:141] control plane version: v1.34.0
	I0916 23:57:54.371850  804231 api_server.go:131] duration metric: took 6.564025ms to wait for apiserver health ...
	I0916 23:57:54.371858  804231 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 23:57:54.376785  804231 system_pods.go:59] 27 kube-system pods found
	I0916 23:57:54.376811  804231 system_pods.go:61] "coredns-66bc5c9577-c94hz" [774f1c0f-9759-44c2-957d-5a97670f951b] Running
	I0916 23:57:54.376815  804231 system_pods.go:61] "coredns-66bc5c9577-qn8m7" [1c58205e-e865-42fc-8282-23e3d779ee97] Running
	I0916 23:57:54.376818  804231 system_pods.go:61] "etcd-ha-472903" [e333577b-838c-41c5-ba86-ce3d7de57077] Running
	I0916 23:57:54.376822  804231 system_pods.go:61] "etcd-ha-472903-m02" [8a478117-c53d-4621-aa09-be3c16d386c0] Running
	I0916 23:57:54.376824  804231 system_pods.go:61] "etcd-ha-472903-m03" [73e10c6a-306a-4c7e-b816-1b8d6b815292] Pending
	I0916 23:57:54.376830  804231 system_pods.go:61] "kindnet-2dqnn" [f5c4164d-0d88-4b7b-bc52-18a7e211fe98] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-2dqnn": pod kindnet-2dqnn is already assigned to node "ha-472903-m03")
	I0916 23:57:54.376833  804231 system_pods.go:61] "kindnet-lh7dv" [1da43ca7-9af7-4573-9cdc-fd21b098ca2c] Running
	I0916 23:57:54.376838  804231 system_pods.go:61] "kindnet-q7c7s" [85db5b30-8ace-4bb0-8886-32b9ca032b2b] Running
	I0916 23:57:54.376842  804231 system_pods.go:61] "kindnet-wwdfr" [e86a6e30-712e-4d39-a235-87489d16c0f3] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-wwdfr": pod kindnet-wwdfr is already assigned to node "ha-472903-m03")
	I0916 23:57:54.376849  804231 system_pods.go:61] "kindnet-x6twd" [f2346479-1adb-4bc7-af07-971525be2b05] Pending: PodScheduled:SchedulerError (pod f2346479-1adb-4bc7-af07-971525be2b05(kube-system/kindnet-x6twd) is in the cache, so can't be assumed)
	I0916 23:57:54.376853  804231 system_pods.go:61] "kube-apiserver-ha-472903" [e2844751-3962-4753-8b63-79c124dd5fd7] Running
	I0916 23:57:54.376858  804231 system_pods.go:61] "kube-apiserver-ha-472903-m02" [6675419c-7693-4970-b73c-8415bcda1684] Running
	I0916 23:57:54.376861  804231 system_pods.go:61] "kube-apiserver-ha-472903-m03" [8c79e747-e193-4471-a4be-ab4d604998ad] Pending
	I0916 23:57:54.376867  804231 system_pods.go:61] "kube-controller-manager-ha-472903" [be5cfd0b-a3b9-44cf-8cde-74e9eb89c738] Running
	I0916 23:57:54.376870  804231 system_pods.go:61] "kube-controller-manager-ha-472903-m02" [54f6e7e0-0a78-4651-b24f-f902c6bf7efb] Running
	I0916 23:57:54.376873  804231 system_pods.go:61] "kube-controller-manager-ha-472903-m03" [d48b1c84-6653-43e8-9322-ae2c64471dde] Pending
	I0916 23:57:54.376876  804231 system_pods.go:61] "kube-proxy-58lkb" [32fed88c-ce9e-4536-8e96-04ab5b4f5d42] Running
	I0916 23:57:54.376881  804231 system_pods.go:61] "kube-proxy-d4m8f" [d4a70eec-48a7-4ea6-871a-1b5ed2beca9a] Running
	I0916 23:57:54.376885  804231 system_pods.go:61] "kube-proxy-kn6nb" [53644856-9fda-4556-bbb5-12254c4b00a3] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-kn6nb": pod kube-proxy-kn6nb is already assigned to node "ha-472903-m03")
	I0916 23:57:54.376889  804231 system_pods.go:61] "kube-proxy-xhlnz" [1967fed1-7529-46d0-accd-ab74751b47fa] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-xhlnz": pod kube-proxy-xhlnz is already assigned to node "ha-472903-m03")
	I0916 23:57:54.376894  804231 system_pods.go:61] "kube-scheduler-ha-472903" [e949de65-b218-45cb-abe7-79b704aae473] Running
	I0916 23:57:54.376897  804231 system_pods.go:61] "kube-scheduler-ha-472903-m02" [08b5a4f0-3aa6-4a82-b171-afc1eafcd4c2] Running
	I0916 23:57:54.376900  804231 system_pods.go:61] "kube-scheduler-ha-472903-m03" [7b954c46-3b8c-47e7-b10c-a20dd936d45c] Pending
	I0916 23:57:54.376904  804231 system_pods.go:61] "kube-vip-ha-472903" [ccdab212-cf0c-4bf0-958b-173e1008f7bc] Running
	I0916 23:57:54.376907  804231 system_pods.go:61] "kube-vip-ha-472903-m02" [748f096f-bec6-4de8-92f0-128db827bdd6] Running
	I0916 23:57:54.376910  804231 system_pods.go:61] "kube-vip-ha-472903-m03" [62b1f237-95c3-4a40-b3b7-a519f7c80ad4] Pending
	I0916 23:57:54.376913  804231 system_pods.go:61] "storage-provisioner" [ac7f283e-4d28-46cf-a519-bd227237d5e7] Running
	I0916 23:57:54.376918  804231 system_pods.go:74] duration metric: took 5.052009ms to wait for pod list to return data ...
	I0916 23:57:54.376925  804231 default_sa.go:34] waiting for default service account to be created ...
	I0916 23:57:54.378969  804231 default_sa.go:45] found service account: "default"
	I0916 23:57:54.378989  804231 default_sa.go:55] duration metric: took 2.056584ms for default service account to be created ...
	I0916 23:57:54.378999  804231 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 23:57:54.383753  804231 system_pods.go:86] 27 kube-system pods found
	I0916 23:57:54.383781  804231 system_pods.go:89] "coredns-66bc5c9577-c94hz" [774f1c0f-9759-44c2-957d-5a97670f951b] Running
	I0916 23:57:54.383790  804231 system_pods.go:89] "coredns-66bc5c9577-qn8m7" [1c58205e-e865-42fc-8282-23e3d779ee97] Running
	I0916 23:57:54.383796  804231 system_pods.go:89] "etcd-ha-472903" [e333577b-838c-41c5-ba86-ce3d7de57077] Running
	I0916 23:57:54.383802  804231 system_pods.go:89] "etcd-ha-472903-m02" [8a478117-c53d-4621-aa09-be3c16d386c0] Running
	I0916 23:57:54.383812  804231 system_pods.go:89] "etcd-ha-472903-m03" [73e10c6a-306a-4c7e-b816-1b8d6b815292] Pending
	I0916 23:57:54.383821  804231 system_pods.go:89] "kindnet-2dqnn" [f5c4164d-0d88-4b7b-bc52-18a7e211fe98] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-2dqnn": pod kindnet-2dqnn is already assigned to node "ha-472903-m03")
	I0916 23:57:54.383829  804231 system_pods.go:89] "kindnet-lh7dv" [1da43ca7-9af7-4573-9cdc-fd21b098ca2c] Running
	I0916 23:57:54.383837  804231 system_pods.go:89] "kindnet-q7c7s" [85db5b30-8ace-4bb0-8886-32b9ca032b2b] Running
	I0916 23:57:54.383842  804231 system_pods.go:89] "kindnet-wwdfr" [e86a6e30-712e-4d39-a235-87489d16c0f3] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-wwdfr": pod kindnet-wwdfr is already assigned to node "ha-472903-m03")
	I0916 23:57:54.383852  804231 system_pods.go:89] "kindnet-x6twd" [f2346479-1adb-4bc7-af07-971525be2b05] Pending: PodScheduled:SchedulerError (pod f2346479-1adb-4bc7-af07-971525be2b05(kube-system/kindnet-x6twd) is in the cache, so can't be assumed)
	I0916 23:57:54.383863  804231 system_pods.go:89] "kube-apiserver-ha-472903" [e2844751-3962-4753-8b63-79c124dd5fd7] Running
	I0916 23:57:54.383874  804231 system_pods.go:89] "kube-apiserver-ha-472903-m02" [6675419c-7693-4970-b73c-8415bcda1684] Running
	I0916 23:57:54.383881  804231 system_pods.go:89] "kube-apiserver-ha-472903-m03" [8c79e747-e193-4471-a4be-ab4d604998ad] Pending
	I0916 23:57:54.383887  804231 system_pods.go:89] "kube-controller-manager-ha-472903" [be5cfd0b-a3b9-44cf-8cde-74e9eb89c738] Running
	I0916 23:57:54.383895  804231 system_pods.go:89] "kube-controller-manager-ha-472903-m02" [54f6e7e0-0a78-4651-b24f-f902c6bf7efb] Running
	I0916 23:57:54.383900  804231 system_pods.go:89] "kube-controller-manager-ha-472903-m03" [d48b1c84-6653-43e8-9322-ae2c64471dde] Pending
	I0916 23:57:54.383908  804231 system_pods.go:89] "kube-proxy-58lkb" [32fed88c-ce9e-4536-8e96-04ab5b4f5d42] Running
	I0916 23:57:54.383913  804231 system_pods.go:89] "kube-proxy-d4m8f" [d4a70eec-48a7-4ea6-871a-1b5ed2beca9a] Running
	I0916 23:57:54.383921  804231 system_pods.go:89] "kube-proxy-kn6nb" [53644856-9fda-4556-bbb5-12254c4b00a3] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-kn6nb": pod kube-proxy-kn6nb is already assigned to node "ha-472903-m03")
	I0916 23:57:54.383930  804231 system_pods.go:89] "kube-proxy-xhlnz" [1967fed1-7529-46d0-accd-ab74751b47fa] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-xhlnz": pod kube-proxy-xhlnz is already assigned to node "ha-472903-m03")
	I0916 23:57:54.383939  804231 system_pods.go:89] "kube-scheduler-ha-472903" [e949de65-b218-45cb-abe7-79b704aae473] Running
	I0916 23:57:54.383946  804231 system_pods.go:89] "kube-scheduler-ha-472903-m02" [08b5a4f0-3aa6-4a82-b171-afc1eafcd4c2] Running
	I0916 23:57:54.383955  804231 system_pods.go:89] "kube-scheduler-ha-472903-m03" [7b954c46-3b8c-47e7-b10c-a20dd936d45c] Pending
	I0916 23:57:54.383962  804231 system_pods.go:89] "kube-vip-ha-472903" [ccdab212-cf0c-4bf0-958b-173e1008f7bc] Running
	I0916 23:57:54.383967  804231 system_pods.go:89] "kube-vip-ha-472903-m02" [748f096f-bec6-4de8-92f0-128db827bdd6] Running
	I0916 23:57:54.383975  804231 system_pods.go:89] "kube-vip-ha-472903-m03" [62b1f237-95c3-4a40-b3b7-a519f7c80ad4] Pending
	I0916 23:57:54.383980  804231 system_pods.go:89] "storage-provisioner" [ac7f283e-4d28-46cf-a519-bd227237d5e7] Running
	I0916 23:57:54.383991  804231 system_pods.go:126] duration metric: took 4.985254ms to wait for k8s-apps to be running ...
	I0916 23:57:54.384002  804231 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 23:57:54.384056  804231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 23:57:54.395540  804231 system_svc.go:56] duration metric: took 11.532177ms WaitForService to wait for kubelet
	I0916 23:57:54.395557  804231 kubeadm.go:578] duration metric: took 2.168909422s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 23:57:54.395577  804231 node_conditions.go:102] verifying NodePressure condition ...
	I0916 23:57:54.398165  804231 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 23:57:54.398183  804231 node_conditions.go:123] node cpu capacity is 8
	I0916 23:57:54.398194  804231 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 23:57:54.398197  804231 node_conditions.go:123] node cpu capacity is 8
	I0916 23:57:54.398201  804231 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 23:57:54.398205  804231 node_conditions.go:123] node cpu capacity is 8
	I0916 23:57:54.398209  804231 node_conditions.go:105] duration metric: took 2.627179ms to run NodePressure ...
	I0916 23:57:54.398219  804231 start.go:241] waiting for startup goroutines ...
	I0916 23:57:54.398248  804231 start.go:255] writing updated cluster config ...
	I0916 23:57:54.398554  804231 ssh_runner.go:195] Run: rm -f paused
	I0916 23:57:54.402187  804231 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0916 23:57:54.402686  804231 kapi.go:59] client config for ha-472903: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key", CAFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 23:57:54.405144  804231 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-c94hz" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:54.409401  804231 pod_ready.go:94] pod "coredns-66bc5c9577-c94hz" is "Ready"
	I0916 23:57:54.409438  804231 pod_ready.go:86] duration metric: took 4.271645ms for pod "coredns-66bc5c9577-c94hz" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:54.409448  804231 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-qn8m7" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:54.413536  804231 pod_ready.go:94] pod "coredns-66bc5c9577-qn8m7" is "Ready"
	I0916 23:57:54.413553  804231 pod_ready.go:86] duration metric: took 4.095453ms for pod "coredns-66bc5c9577-qn8m7" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:54.415699  804231 pod_ready.go:83] waiting for pod "etcd-ha-472903" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:54.419599  804231 pod_ready.go:94] pod "etcd-ha-472903" is "Ready"
	I0916 23:57:54.419618  804231 pod_ready.go:86] duration metric: took 3.899664ms for pod "etcd-ha-472903" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:54.419627  804231 pod_ready.go:83] waiting for pod "etcd-ha-472903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:54.423363  804231 pod_ready.go:94] pod "etcd-ha-472903-m02" is "Ready"
	I0916 23:57:54.423380  804231 pod_ready.go:86] duration metric: took 3.746731ms for pod "etcd-ha-472903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:54.423386  804231 pod_ready.go:83] waiting for pod "etcd-ha-472903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:54.603706  804231 request.go:683] "Waited before sending request" delay="180.227617ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-472903-m03"
	I0916 23:57:54.803902  804231 request.go:683] "Waited before sending request" delay="197.349252ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903-m03"
	I0916 23:57:55.003954  804231 request.go:683] "Waited before sending request" delay="80.206914ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-472903-m03"
	I0916 23:57:55.203362  804231 request.go:683] "Waited before sending request" delay="196.197515ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903-m03"
	I0916 23:57:55.206052  804231 pod_ready.go:94] pod "etcd-ha-472903-m03" is "Ready"
	I0916 23:57:55.206075  804231 pod_ready.go:86] duration metric: took 782.683771ms for pod "etcd-ha-472903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:55.403450  804231 request.go:683] "Waited before sending request" delay="197.254129ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I0916 23:57:55.406629  804231 pod_ready.go:83] waiting for pod "kube-apiserver-ha-472903" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:55.604081  804231 request.go:683] "Waited before sending request" delay="197.327981ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-472903"
	I0916 23:57:55.803277  804231 request.go:683] "Waited before sending request" delay="196.28238ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903"
	I0916 23:57:55.806023  804231 pod_ready.go:94] pod "kube-apiserver-ha-472903" is "Ready"
	I0916 23:57:55.806053  804231 pod_ready.go:86] duration metric: took 399.400731ms for pod "kube-apiserver-ha-472903" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:55.806064  804231 pod_ready.go:83] waiting for pod "kube-apiserver-ha-472903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:56.003360  804231 request.go:683] "Waited before sending request" delay="197.181089ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-472903-m02"
	I0916 23:57:56.203591  804231 request.go:683] "Waited before sending request" delay="197.334062ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903-m02"
	I0916 23:57:56.206593  804231 pod_ready.go:94] pod "kube-apiserver-ha-472903-m02" is "Ready"
	I0916 23:57:56.206619  804231 pod_ready.go:86] duration metric: took 400.548564ms for pod "kube-apiserver-ha-472903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:56.206627  804231 pod_ready.go:83] waiting for pod "kube-apiserver-ha-472903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:56.404053  804231 request.go:683] "Waited before sending request" delay="197.330591ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-472903-m03"
	I0916 23:57:56.603366  804231 request.go:683] "Waited before sending request" delay="196.334008ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903-m03"
	I0916 23:57:56.606216  804231 pod_ready.go:94] pod "kube-apiserver-ha-472903-m03" is "Ready"
	I0916 23:57:56.606240  804231 pod_ready.go:86] duration metric: took 399.60823ms for pod "kube-apiserver-ha-472903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:56.803696  804231 request.go:683] "Waited before sending request" delay="197.341894ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I0916 23:57:56.806878  804231 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-472903" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:57.003237  804231 request.go:683] "Waited before sending request" delay="196.261492ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-472903"
	I0916 23:57:57.203189  804231 request.go:683] "Waited before sending request" delay="197.16206ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903"
	I0916 23:57:57.205847  804231 pod_ready.go:94] pod "kube-controller-manager-ha-472903" is "Ready"
	I0916 23:57:57.205870  804231 pod_ready.go:86] duration metric: took 398.97003ms for pod "kube-controller-manager-ha-472903" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:57.205878  804231 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-472903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:57.403223  804231 request.go:683] "Waited before sending request" delay="197.233762ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-472903-m02"
	I0916 23:57:57.603503  804231 request.go:683] "Waited before sending request" delay="197.308924ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903-m02"
	I0916 23:57:57.606309  804231 pod_ready.go:94] pod "kube-controller-manager-ha-472903-m02" is "Ready"
	I0916 23:57:57.606331  804231 pod_ready.go:86] duration metric: took 400.447455ms for pod "kube-controller-manager-ha-472903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:57.606339  804231 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-472903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:57.803572  804231 request.go:683] "Waited before sending request" delay="197.156861ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-472903-m03"
	I0916 23:57:58.003564  804231 request.go:683] "Waited before sending request" delay="197.308739ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903-m03"
	I0916 23:57:58.006495  804231 pod_ready.go:94] pod "kube-controller-manager-ha-472903-m03" is "Ready"
	I0916 23:57:58.006527  804231 pod_ready.go:86] duration metric: took 400.177209ms for pod "kube-controller-manager-ha-472903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:58.203971  804231 request.go:683] "Waited before sending request" delay="197.330656ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I0916 23:57:58.207087  804231 pod_ready.go:83] waiting for pod "kube-proxy-58lkb" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:58.403484  804231 request.go:683] "Waited before sending request" delay="196.298118ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-58lkb"
	I0916 23:57:58.603727  804231 request.go:683] "Waited before sending request" delay="197.238459ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903-m02"
	I0916 23:57:58.606561  804231 pod_ready.go:94] pod "kube-proxy-58lkb" is "Ready"
	I0916 23:57:58.606586  804231 pod_ready.go:86] duration metric: took 399.476011ms for pod "kube-proxy-58lkb" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:58.606593  804231 pod_ready.go:83] waiting for pod "kube-proxy-d4m8f" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:58.804003  804231 request.go:683] "Waited before sending request" delay="197.323847ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d4m8f"
	I0916 23:57:59.003937  804231 request.go:683] "Waited before sending request" delay="197.340178ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903"
	I0916 23:57:59.006899  804231 pod_ready.go:94] pod "kube-proxy-d4m8f" is "Ready"
	I0916 23:57:59.006927  804231 pod_ready.go:86] duration metric: took 400.327971ms for pod "kube-proxy-d4m8f" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:59.006938  804231 pod_ready.go:83] waiting for pod "kube-proxy-kn6nb" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:59.203366  804231 request.go:683] "Waited before sending request" delay="196.341882ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kn6nb"
	I0916 23:57:59.403608  804231 request.go:683] "Waited before sending request" delay="197.193431ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903-m03"
	I0916 23:57:59.604047  804231 request.go:683] "Waited before sending request" delay="96.244025ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kn6nb"
	I0916 23:57:59.803112  804231 request.go:683] "Waited before sending request" delay="196.282766ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903-m03"
	I0916 23:58:00.203120  804231 request.go:683] "Waited before sending request" delay="192.276334ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903-m03"
	I0916 23:58:00.603459  804231 request.go:683] "Waited before sending request" delay="93.218157ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903-m03"
	W0916 23:58:01.014543  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:03.512871  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:06.012965  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:08.512763  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:11.012966  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:13.013166  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:15.512655  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:18.012615  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:20.513188  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:23.012908  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:25.013240  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:27.512733  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:30.012142  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:32.012503  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:34.013070  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:36.512643  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	I0916 23:58:37.014670  804231 pod_ready.go:94] pod "kube-proxy-kn6nb" is "Ready"
	I0916 23:58:37.014697  804231 pod_ready.go:86] duration metric: took 38.007753603s for pod "kube-proxy-kn6nb" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:37.017732  804231 pod_ready.go:83] waiting for pod "kube-scheduler-ha-472903" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:37.022228  804231 pod_ready.go:94] pod "kube-scheduler-ha-472903" is "Ready"
	I0916 23:58:37.022246  804231 pod_ready.go:86] duration metric: took 4.488553ms for pod "kube-scheduler-ha-472903" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:37.022253  804231 pod_ready.go:83] waiting for pod "kube-scheduler-ha-472903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:37.026173  804231 pod_ready.go:94] pod "kube-scheduler-ha-472903-m02" is "Ready"
	I0916 23:58:37.026191  804231 pod_ready.go:86] duration metric: took 3.932068ms for pod "kube-scheduler-ha-472903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:37.026198  804231 pod_ready.go:83] waiting for pod "kube-scheduler-ha-472903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:37.030029  804231 pod_ready.go:94] pod "kube-scheduler-ha-472903-m03" is "Ready"
	I0916 23:58:37.030046  804231 pod_ready.go:86] duration metric: took 3.843487ms for pod "kube-scheduler-ha-472903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:37.030054  804231 pod_ready.go:40] duration metric: took 42.627839542s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0916 23:58:37.073472  804231 start.go:617] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0916 23:58:37.074923  804231 out.go:179] * Done! kubectl is now configured to use "ha-472903" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0a41d8b587e02       8c811b4aec35f       12 minutes ago      Running             busybox                   0                   a2422ee3e6e6d       busybox-7b57f96db7-6hrm6
	f33de265effb1       6e38f40d628db       13 minutes ago      Running             storage-provisioner       1                   1c0713f862ea0       storage-provisioner
	9f103b05d2d6f       52546a367cc9e       13 minutes ago      Running             coredns                   0                   9579263342827       coredns-66bc5c9577-c94hz
	3b457407f10e3       52546a367cc9e       14 minutes ago      Running             coredns                   0                   290cfb537788e       coredns-66bc5c9577-qn8m7
	cc69d2451cb65       409467f978b4a       14 minutes ago      Running             kindnet-cni               0                   3e17d6ae9b2a6       kindnet-lh7dv
	f4767b6363ce9       6e38f40d628db       14 minutes ago      Exited              storage-provisioner       0                   1c0713f862ea0       storage-provisioner
	92dd4d116eb03       df0860106674d       14 minutes ago      Running             kube-proxy                0                   8c0ecd5301326       kube-proxy-d4m8f
	3cb75495f7a54       765655ea60781       14 minutes ago      Running             kube-vip                  0                   4c425da29992d       kube-vip-ha-472903
	bba28cace6502       46169d968e920       14 minutes ago      Running             kube-scheduler            0                   f18dd7697c60f       kube-scheduler-ha-472903
	087290a41f59c       a0af72f2ec6d6       14 minutes ago      Running             kube-controller-manager   0                   0760ebe1d2a56       kube-controller-manager-ha-472903
	0aba62132d764       90550c43ad2bc       14 minutes ago      Running             kube-apiserver            0                   8ad1fa8bc0267       kube-apiserver-ha-472903
	23c0af0bdbe95       5f1f5298c888d       14 minutes ago      Running             etcd                      0                   b01a62742caec       etcd-ha-472903
	
	
	==> containerd <==
	Sep 16 23:57:20 ha-472903 containerd[765]: time="2025-09-16T23:57:20.857383931Z" level=info msg="StartContainer for \"9f103b05d2d6fd9df1ffca0135173363251e58587aa3f9093200d96a7302d315\""
	Sep 16 23:57:20 ha-472903 containerd[765]: time="2025-09-16T23:57:20.915209442Z" level=info msg="StartContainer for \"9f103b05d2d6fd9df1ffca0135173363251e58587aa3f9093200d96a7302d315\" returns successfully"
	Sep 16 23:57:26 ha-472903 containerd[765]: time="2025-09-16T23:57:26.847849669Z" level=info msg="received exit event container_id:\"f4767b6363ce9c18f8b183cfefd42f69a8b6845fea9e30eec23d90668bc0a3f8\"  id:\"f4767b6363ce9c18f8b183cfefd42f69a8b6845fea9e30eec23d90668bc0a3f8\"  pid:2188  exit_status:1  exited_at:{seconds:1758067046  nanos:847300745}"
	Sep 16 23:57:29 ha-472903 containerd[765]: time="2025-09-16T23:57:29.084468964Z" level=info msg="shim disconnected" id=f4767b6363ce9c18f8b183cfefd42f69a8b6845fea9e30eec23d90668bc0a3f8 namespace=k8s.io
	Sep 16 23:57:29 ha-472903 containerd[765]: time="2025-09-16T23:57:29.084514637Z" level=warning msg="cleaning up after shim disconnected" id=f4767b6363ce9c18f8b183cfefd42f69a8b6845fea9e30eec23d90668bc0a3f8 namespace=k8s.io
	Sep 16 23:57:29 ha-472903 containerd[765]: time="2025-09-16T23:57:29.084528446Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 16 23:57:29 ha-472903 containerd[765]: time="2025-09-16T23:57:29.861023305Z" level=info msg="CreateContainer within sandbox \"1c0713f862ea047ef39e7ae39aea7b7769255565bbf61da2859ac341b5b32bca\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:1,}"
	Sep 16 23:57:29 ha-472903 containerd[765]: time="2025-09-16T23:57:29.875038922Z" level=info msg="CreateContainer within sandbox \"1c0713f862ea047ef39e7ae39aea7b7769255565bbf61da2859ac341b5b32bca\" for &ContainerMetadata{Name:storage-provisioner,Attempt:1,} returns container id \"f33de265effb1050318db82caef7df35706c6a78a2f601466a28e71f4048fedc\""
	Sep 16 23:57:29 ha-472903 containerd[765]: time="2025-09-16T23:57:29.875884762Z" level=info msg="StartContainer for \"f33de265effb1050318db82caef7df35706c6a78a2f601466a28e71f4048fedc\""
	Sep 16 23:57:29 ha-472903 containerd[765]: time="2025-09-16T23:57:29.929708067Z" level=info msg="StartContainer for \"f33de265effb1050318db82caef7df35706c6a78a2f601466a28e71f4048fedc\" returns successfully"
	Sep 16 23:58:40 ha-472903 containerd[765]: time="2025-09-16T23:58:40.362974621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox-7b57f96db7-6hrm6,Uid:bd03bad4-af1e-42d0-81fb-6fcaeaa8775e,Namespace:default,Attempt:0,}"
	Sep 16 23:58:40 ha-472903 containerd[765]: time="2025-09-16T23:58:40.455106923Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to create inotify fd"
	Sep 16 23:58:40 ha-472903 containerd[765]: time="2025-09-16T23:58:40.455480779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox-7b57f96db7-6hrm6,Uid:bd03bad4-af1e-42d0-81fb-6fcaeaa8775e,Namespace:default,Attempt:0,} returns sandbox id \"a2422ee3e6e6de2b23238fbdd05d962f2c25009569227e21869b285e5353e70a\""
	Sep 16 23:58:40 ha-472903 containerd[765]: time="2025-09-16T23:58:40.457290181Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28\""
	Sep 16 23:58:42 ha-472903 containerd[765]: time="2025-09-16T23:58:42.440332779Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Sep 16 23:58:42 ha-472903 containerd[765]: time="2025-09-16T23:58:42.440968214Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28: active requests=0, bytes read=727667"
	Sep 16 23:58:42 ha-472903 containerd[765]: time="2025-09-16T23:58:42.442025332Z" level=info msg="ImageCreate event name:\"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Sep 16 23:58:42 ha-472903 containerd[765]: time="2025-09-16T23:58:42.443719507Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Sep 16 23:58:42 ha-472903 containerd[765]: time="2025-09-16T23:58:42.444221405Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28\" with image id \"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\", repo tag \"gcr.io/k8s-minikube/busybox:1.28\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12\", size \"725911\" in 1.986887608s"
	Sep 16 23:58:42 ha-472903 containerd[765]: time="2025-09-16T23:58:42.444254598Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28\" returns image reference \"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\""
	Sep 16 23:58:42 ha-472903 containerd[765]: time="2025-09-16T23:58:42.447875079Z" level=info msg="CreateContainer within sandbox \"a2422ee3e6e6de2b23238fbdd05d962f2c25009569227e21869b285e5353e70a\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Sep 16 23:58:42 ha-472903 containerd[765]: time="2025-09-16T23:58:42.457018566Z" level=info msg="CreateContainer within sandbox \"a2422ee3e6e6de2b23238fbdd05d962f2c25009569227e21869b285e5353e70a\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"0a41d8b587e021b881cec481dd5e13b98d695cdba1671a8eb501413b121637d8\""
	Sep 16 23:58:42 ha-472903 containerd[765]: time="2025-09-16T23:58:42.457508138Z" level=info msg="StartContainer for \"0a41d8b587e021b881cec481dd5e13b98d695cdba1671a8eb501413b121637d8\""
	Sep 16 23:58:42 ha-472903 containerd[765]: time="2025-09-16T23:58:42.510633374Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to create inotify fd"
	Sep 16 23:58:42 ha-472903 containerd[765]: time="2025-09-16T23:58:42.512731136Z" level=info msg="StartContainer for \"0a41d8b587e021b881cec481dd5e13b98d695cdba1671a8eb501413b121637d8\" returns successfully"
	
	
	==> coredns [3b457407f10e357ce33da7fa3fb4333f8312f0d3e3570cf8528cdcac8f5a1d0f] <==
	[INFO] 10.244.1.2:57899 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.012540337s
	[INFO] 10.244.1.2:54323 - 5 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,rd,ra 124 0.008980197s
	[INFO] 10.244.1.2:53799 - 6 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,rd,ra 126 0.009949044s
	[INFO] 10.244.0.4:39485 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157098s
	[INFO] 10.244.0.4:57871 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 89 0.000750185s
	[INFO] 10.244.0.4:53410 - 5 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,aa,rd,ra 126 0.000089028s
	[INFO] 10.244.1.2:37733 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150317s
	[INFO] 10.244.1.2:59346 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.028128363s
	[INFO] 10.244.1.2:43091 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.01004668s
	[INFO] 10.244.1.2:37227 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000191819s
	[INFO] 10.244.1.2:40079 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000125376s
	[INFO] 10.244.0.4:38168 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000181114s
	[INFO] 10.244.0.4:60067 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000087147s
	[INFO] 10.244.0.4:47611 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000122939s
	[INFO] 10.244.0.4:37626 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000121195s
	[INFO] 10.244.1.2:42817 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159509s
	[INFO] 10.244.1.2:33910 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000186538s
	[INFO] 10.244.1.2:37929 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000109836s
	[INFO] 10.244.0.4:50698 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000212263s
	[INFO] 10.244.0.4:33166 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000100167s
	[INFO] 10.244.1.2:50377 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157558s
	[INFO] 10.244.1.2:39491 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000132025s
	[INFO] 10.244.1.2:50075 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000112028s
	[INFO] 10.244.0.4:58743 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000149175s
	[INFO] 10.244.0.4:52796 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000114946s
	
	
	==> coredns [9f103b05d2d6fd9df1ffca0135173363251e58587aa3f9093200d96a7302d315] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45239 - 14115 "HINFO IN 5883645869461503498.3950535614037284853. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.058516241s
	[INFO] 10.244.1.2:55352 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 89 0.003252862s
	[INFO] 10.244.0.4:33650 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.001640931s
	[INFO] 10.244.0.4:50077 - 6 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,rd,ra 124 0.000621363s
	[INFO] 10.244.1.2:48439 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000189187s
	[INFO] 10.244.1.2:39582 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000151327s
	[INFO] 10.244.1.2:59539 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000140715s
	[INFO] 10.244.0.4:42999 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000177514s
	[INFO] 10.244.0.4:36769 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.010694753s
	[INFO] 10.244.0.4:53074 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000158932s
	[INFO] 10.244.0.4:57223 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00012213s
	[INFO] 10.244.1.2:50810 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000176678s
	[INFO] 10.244.0.4:58045 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142445s
	[INFO] 10.244.0.4:39777 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000123555s
	[INFO] 10.244.1.2:59022 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000148853s
	[INFO] 10.244.0.4:45136 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001657s
	[INFO] 10.244.0.4:37711 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000134332s
	
	
	==> describe nodes <==
	Name:               ha-472903
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-472903
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=ha-472903
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_16T23_56_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Sep 2025 23:56:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-472903
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:11:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:08:42 +0000   Tue, 16 Sep 2025 23:56:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:08:42 +0000   Tue, 16 Sep 2025 23:56:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:08:42 +0000   Tue, 16 Sep 2025 23:56:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:08:42 +0000   Tue, 16 Sep 2025 23:56:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-472903
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 ac22e2ab5b0349cdb9474983aa23278e
	  System UUID:                695af4c7-28fb-4299-9454-75db3262ca2c
	  Boot ID:                    4acfd7d3-9698-436f-b4ae-efdf6bd483d5
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-6hrm6             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-66bc5c9577-c94hz             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     14m
	  kube-system                 coredns-66bc5c9577-qn8m7             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     14m
	  kube-system                 etcd-ha-472903                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         14m
	  kube-system                 kindnet-lh7dv                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      14m
	  kube-system                 kube-apiserver-ha-472903             250m (3%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-472903    200m (2%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-d4m8f                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-472903             100m (1%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-472903                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node ha-472903 status is now: NodeHasSufficientPID
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node ha-472903 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node ha-472903 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m                kubelet          Node ha-472903 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                kubelet          Node ha-472903 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m                kubelet          Node ha-472903 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m                node-controller  Node ha-472903 event: Registered Node ha-472903 in Controller
	  Normal  RegisteredNode           13m                node-controller  Node ha-472903 event: Registered Node ha-472903 in Controller
	  Normal  RegisteredNode           13m                node-controller  Node ha-472903 event: Registered Node ha-472903 in Controller
	
	
	Name:               ha-472903-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-472903-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=ha-472903
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_16T23_57_23_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Sep 2025 23:57:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-472903-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:11:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:07:45 +0000   Tue, 16 Sep 2025 23:57:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:07:45 +0000   Tue, 16 Sep 2025 23:57:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:07:45 +0000   Tue, 16 Sep 2025 23:57:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:07:45 +0000   Tue, 16 Sep 2025 23:57:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-472903-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 4094672df3d84509ae4c88c54f7f5e93
	  System UUID:                85df9db8-f21a-4038-9f8c-4cc1d81dc0d5
	  Boot ID:                    4acfd7d3-9698-436f-b4ae-efdf6bd483d5
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-4jfjt                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-ha-472903-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         13m
	  kube-system                 kindnet-q7c7s                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-472903-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-472903-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-58lkb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-472903-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-472903-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  Starting        13m   kube-proxy       
	  Normal  RegisteredNode  13m   node-controller  Node ha-472903-m02 event: Registered Node ha-472903-m02 in Controller
	  Normal  RegisteredNode  13m   node-controller  Node ha-472903-m02 event: Registered Node ha-472903-m02 in Controller
	  Normal  RegisteredNode  13m   node-controller  Node ha-472903-m02 event: Registered Node ha-472903-m02 in Controller
	
	
	Name:               ha-472903-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-472903-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=ha-472903
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_16T23_57_52_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Sep 2025 23:57:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-472903-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:11:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:09:45 +0000   Tue, 16 Sep 2025 23:57:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:09:45 +0000   Tue, 16 Sep 2025 23:57:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:09:45 +0000   Tue, 16 Sep 2025 23:57:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:09:45 +0000   Tue, 16 Sep 2025 23:57:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-472903-m03
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 9964c713c65f4333be8a877aab744040
	  System UUID:                7eb7f2ee-a32d-4876-a4ad-58f745b9c377
	  Boot ID:                    4acfd7d3-9698-436f-b4ae-efdf6bd483d5
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-mknzs                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-ha-472903-m03                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         13m
	  kube-system                 kindnet-x6twd                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-472903-m03             250m (3%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-472903-m03    200m (2%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-kn6nb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-472903-m03             100m (1%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-472903-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  13m   node-controller  Node ha-472903-m03 event: Registered Node ha-472903-m03 in Controller
	  Normal  RegisteredNode  13m   node-controller  Node ha-472903-m03 event: Registered Node ha-472903-m03 in Controller
	  Normal  RegisteredNode  13m   node-controller  Node ha-472903-m03 event: Registered Node ha-472903-m03 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 86 e8 75 4b 01 57 08 06
	[  +0.025562] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff be 3d a9 85 b1 bd 08 06
	[ +13.150028] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff d6 5c f0 26 cd ba 08 06
	[  +0.000341] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a 20 90 fb f5 d8 08 06
	[ +28.639349] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 06 26 63 8d db 90 08 06
	[  +0.000417] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff be 3d a9 85 b1 bd 08 06
	[  +0.836892] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 56 cc 9b 52 38 94 08 06
	[  +0.080327] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3e 79 8e c8 7e 37 08 06
	[Sep16 23:40] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 66 3b 76 df aa 6a 08 06
	[ +20.325550] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff de 39 4b 41 df 63 08 06
	[  +0.000318] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 66 3b 76 df aa 6a 08 06
	[  +8.925776] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3e cd c1 f7 dc c8 08 06
	[  +0.000373] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 3e 79 8e c8 7e 37 08 06
	
	
	==> etcd [23c0af0bdbe9526d53769461ed9f80d8c743b02e625b65cce39c888f5e7d4b4e] <==
	{"level":"info","ts":"2025-09-16T23:57:38.321619Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"ab9d0391dce79465","stream-type":"stream Message"}
	{"level":"warn","ts":"2025-09-16T23:57:38.321647Z","caller":"rafthttp/stream.go:264","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"ab9d0391dce79465"}
	{"level":"info","ts":"2025-09-16T23:57:38.321659Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"ab9d0391dce79465"}
	{"level":"info","ts":"2025-09-16T23:57:38.321995Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"ab9d0391dce79465"}
	{"level":"info","ts":"2025-09-16T23:57:38.324746Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"ab9d0391dce79465","stream-type":"stream MsgApp v2"}
	{"level":"warn","ts":"2025-09-16T23:57:38.324782Z","caller":"rafthttp/stream.go:264","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"ab9d0391dce79465"}
	{"level":"info","ts":"2025-09-16T23:57:38.324796Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"ab9d0391dce79465"}
	{"level":"warn","ts":"2025-09-16T23:57:38.539376Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:45372","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-16T23:57:38.542781Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aec36adc501070cc switched to configuration voters=(4226730353838347643 12366044076840555621 12593026477526642892)"}
	{"level":"info","ts":"2025-09-16T23:57:38.542928Z","caller":"membership/cluster.go:550","msg":"promote member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","promoted-member-id":"ab9d0391dce79465"}
	{"level":"info","ts":"2025-09-16T23:57:38.542988Z","caller":"etcdserver/server.go:1752","msg":"applied a configuration change through raft","local-member-id":"aec36adc501070cc","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"ab9d0391dce79465"}
	{"level":"info","ts":"2025-09-16T23:57:40.311787Z","caller":"etcdserver/server.go:1856","msg":"sent merged snapshot","from":"aec36adc501070cc","to":"3aa85cdcd5e5557b","bytes":876533,"size":"876 kB","took":"30.009467109s"}
	{"level":"info","ts":"2025-09-16T23:57:47.400606Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-16T23:57:51.874557Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-16T23:58:06.103123Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-16T23:58:08.299219Z","caller":"etcdserver/server.go:1856","msg":"sent merged snapshot","from":"aec36adc501070cc","to":"ab9d0391dce79465","bytes":1356737,"size":"1.4 MB","took":"30.011071692s"}
	{"level":"info","ts":"2025-09-17T00:06:46.502551Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1554}
	{"level":"info","ts":"2025-09-17T00:06:46.523688Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1554,"took":"20.616779ms","hash":4277915431,"current-db-size-bytes":3936256,"current-db-size":"3.9 MB","current-db-size-in-use-bytes":2105344,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2025-09-17T00:06:46.523839Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":4277915431,"revision":1554,"compact-revision":-1}
	{"level":"info","ts":"2025-09-17T00:10:51.037991Z","caller":"traceutil/trace.go:172","msg":"trace[1596502853] transaction","detail":"{read_only:false; response_revision:2892; number_of_response:1; }","duration":"106.292545ms","start":"2025-09-17T00:10:50.931676Z","end":"2025-09-17T00:10:51.037969Z","steps":["trace[1596502853] 'process raft request'  (duration: 106.163029ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-17T00:10:52.331973Z","caller":"traceutil/trace.go:172","msg":"trace[583569919] transaction","detail":"{read_only:false; response_revision:2894; number_of_response:1; }","duration":"112.232554ms","start":"2025-09-17T00:10:52.219723Z","end":"2025-09-17T00:10:52.331956Z","steps":["trace[583569919] 'process raft request'  (duration: 112.100203ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-17T00:11:09.266390Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"165.274935ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:602"}
	{"level":"info","ts":"2025-09-17T00:11:09.266493Z","caller":"traceutil/trace.go:172","msg":"trace[316861325] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:2934; }","duration":"165.393135ms","start":"2025-09-17T00:11:09.101086Z","end":"2025-09-17T00:11:09.266479Z","steps":["trace[316861325] 'range keys from in-memory index tree'  (duration: 164.766592ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-17T00:11:09.393171Z","caller":"traceutil/trace.go:172","msg":"trace[484529161] transaction","detail":"{read_only:false; response_revision:2935; number_of_response:1; }","duration":"123.717206ms","start":"2025-09-17T00:11:09.269439Z","end":"2025-09-17T00:11:09.393156Z","steps":["trace[484529161] 'process raft request'  (duration: 123.599826ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-17T00:11:09.634612Z","caller":"traceutil/trace.go:172","msg":"trace[1840342263] transaction","detail":"{read_only:false; response_revision:2936; number_of_response:1; }","duration":"177.817508ms","start":"2025-09-17T00:11:09.456780Z","end":"2025-09-17T00:11:09.634597Z","steps":["trace[1840342263] 'process raft request'  (duration: 177.726281ms)"],"step_count":1}
	
	
	==> kernel <==
	 00:11:17 up  2:53,  0 users,  load average: 0.84, 0.49, 0.84
	Linux ha-472903 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [cc69d2451cb65860b5bc78e027be2fc1cb0f9fa6542b4abe3bc1ff1c90a8fe60] <==
	I0917 00:10:27.511671       1 main.go:324] Node ha-472903-m03 has CIDR [10.244.2.0/24] 
	I0917 00:10:37.506147       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:10:37.506186       1 main.go:301] handling current node
	I0917 00:10:37.506204       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:10:37.506209       1 main.go:324] Node ha-472903-m02 has CIDR [10.244.1.0/24] 
	I0917 00:10:37.506448       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:10:37.506459       1 main.go:324] Node ha-472903-m03 has CIDR [10.244.2.0/24] 
	I0917 00:10:47.508686       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:10:47.508735       1 main.go:301] handling current node
	I0917 00:10:47.508758       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:10:47.508766       1 main.go:324] Node ha-472903-m02 has CIDR [10.244.1.0/24] 
	I0917 00:10:47.509017       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:10:47.509093       1 main.go:324] Node ha-472903-m03 has CIDR [10.244.2.0/24] 
	I0917 00:10:57.504295       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:10:57.504328       1 main.go:324] Node ha-472903-m03 has CIDR [10.244.2.0/24] 
	I0917 00:10:57.504535       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:10:57.504555       1 main.go:301] handling current node
	I0917 00:10:57.504571       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:10:57.504577       1 main.go:324] Node ha-472903-m02 has CIDR [10.244.1.0/24] 
	I0917 00:11:07.510900       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:11:07.510941       1 main.go:301] handling current node
	I0917 00:11:07.510955       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:11:07.510960       1 main.go:324] Node ha-472903-m02 has CIDR [10.244.1.0/24] 
	I0917 00:11:07.512207       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:11:07.512233       1 main.go:324] Node ha-472903-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [0aba62132d764965d8e1a80a4a6345bb7e34892b23143da4a7af3450cd465d6c] <==
	I0917 00:06:06.800617       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:06:32.710262       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:06:47.441344       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0917 00:07:34.732036       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:07:42.022448       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:08:46.236959       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:08:51.159386       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:09:52.603432       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:09:53.014406       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0917 00:10:41.954540       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37534: use of closed network connection
	E0917 00:10:42.122977       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37556: use of closed network connection
	E0917 00:10:42.250606       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37572: use of closed network connection
	E0917 00:10:42.442469       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37584: use of closed network connection
	E0917 00:10:42.605380       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37602: use of closed network connection
	E0917 00:10:42.730284       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37612: use of closed network connection
	E0917 00:10:42.884291       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37626: use of closed network connection
	E0917 00:10:43.036952       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37644: use of closed network connection
	E0917 00:10:43.161098       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37658: use of closed network connection
	E0917 00:10:45.408563       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37722: use of closed network connection
	E0917 00:10:45.568465       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37752: use of closed network connection
	E0917 00:10:45.727267       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37770: use of closed network connection
	E0917 00:10:45.883182       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37790: use of closed network connection
	E0917 00:10:46.004301       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37814: use of closed network connection
	I0917 00:10:57.282648       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:10:57.462257       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [087290a41f59caa4f9bc89759bcec6cf90f47c8a2ab83b7c671a8fff35641df9] <==
	I0916 23:56:54.728442       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0916 23:56:54.728466       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0916 23:56:54.728485       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0916 23:56:54.728644       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0916 23:56:54.728665       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0916 23:56:54.728648       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0916 23:56:54.728914       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0916 23:56:54.730175       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0916 23:56:54.730201       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0916 23:56:54.732432       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0916 23:56:54.733452       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0916 23:56:54.735655       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0916 23:56:54.735714       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0916 23:56:54.735760       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0916 23:56:54.735767       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0916 23:56:54.735772       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0916 23:56:54.740680       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-472903" podCIDRs=["10.244.0.0/24"]
	I0916 23:56:54.749950       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0916 23:57:22.933124       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-472903-m02\" does not exist"
	I0916 23:57:22.943785       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-472903-m02" podCIDRs=["10.244.1.0/24"]
	I0916 23:57:24.681339       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-472903-m02"
	I0916 23:57:51.749676       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-472903-m03\" does not exist"
	I0916 23:57:51.772476       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-472903-m03" podCIDRs=["10.244.2.0/24"]
	E0916 23:57:51.829801       1 daemon_controller.go:346] "Unhandled Error" err="kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kube-proxy\", GenerateName:\"\", Namespace:\"kube-system\", SelfLink:\"\", UID:\"3f5da9fc-6769-4ca8-a715-edeace44c646\", ResourceVersion:\"594\", Generation:1, CreationTimestamp:time.Date(2025, time.September, 16, 23, 56, 49, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"k8s-app\":\"kube-proxy\"}, Annotations:map[string]string{\"deprecated.daemonset.template.generation\":\"1\"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc00222d0e0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:\"\", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"
\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"k8s-app\":\"kube-proxy\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:\"kube-proxy\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSourc
e)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc0021ed7c0), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"xtables-lock\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc002fcdce0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolu
meSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIV
olumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"lib-modules\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc002fcdcf8), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtu
alDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:\"kube-proxy\", Image:\"registry.k8s.io/kube-proxy:v1.34.0\", Command:[]string{\"/usr/local/bin/kube-proxy\", \"--config=/var/lib/kube-proxy/config.conf\", \"--hostname-override=$(NODE_NAME)\"}, Args:[]string(nil), WorkingDir:\"\", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:\"NODE_NAME\", Value:\"\", ValueFrom:(*v1.EnvVarSource)(0xc00144a7e0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.Re
sourceList(nil), Claims:[]v1.ResourceClaim(nil)}, ResizePolicy:[]v1.ContainerResizePolicy(nil), RestartPolicy:(*v1.ContainerRestartPolicy)(nil), RestartPolicyRules:[]v1.ContainerRestartRule(nil), VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:\"kube-proxy\", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/var/lib/kube-proxy\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"xtables-lock\", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/run/xtables.lock\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"lib-modules\", ReadOnly:true, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/lib/modules\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Life
cycle:(*v1.Lifecycle)(nil), TerminationMessagePath:\"/dev/termination-log\", TerminationMessagePolicy:\"File\", ImagePullPolicy:\"IfNotPresent\", SecurityContext:(*v1.SecurityContext)(0xc0019549c0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:\"Always\", TerminationGracePeriodSeconds:(*int64)(0xc001900b18), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:\"ClusterFirst\", NodeSelector:map[string]string{\"kubernetes.io/os\":\"linux\"}, ServiceAccountName:\"kube-proxy\", DeprecatedServiceAccount:\"kube-proxy\", AutomountServiceAccountToken:(*bool)(nil), NodeName:\"\", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001ba1200), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:\"\", Subdomain:\"\", Affinity:(*v1.Affinity)(nil), SchedulerName:\"default-scheduler\", Tolerations:[]v1.Toleration{v1.Toleration{Key:\"\", Operator:\"Exists\", Value:\"\", Effect:\"\", Tole
rationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:\"system-node-critical\", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil), SchedulingGates:[]v1.PodSchedulingGate(nil), ResourceClaims:[]v1.PodResourceClaim(nil), Resources:(*v1.ResourceRequirements)(nil), HostnameOverride:(*string)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:\"RollingUpdate\", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc001e14570)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001900b70)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:2, NumberMisscheduled:0, DesiredNumberScheduled:2, NumberReady:2, ObservedGeneration:1, UpdatedNumberScheduled:2, NumberAvailab
le:2, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps \"kube-proxy\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0916 23:57:54.685322       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-472903-m03"
	
	
	==> kube-proxy [92dd4d116eb0387dded82fb32d35690ec2d00e3f5e7ac81bf7aea0c6814edd5e] <==
	I0916 23:56:56.831012       1 server_linux.go:53] "Using iptables proxy"
	I0916 23:56:56.891635       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0916 23:56:56.991820       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0916 23:56:56.991862       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0916 23:56:56.991952       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 23:56:57.015955       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 23:56:57.016001       1 server_linux.go:132] "Using iptables Proxier"
	I0916 23:56:57.021120       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 23:56:57.021457       1 server.go:527] "Version info" version="v1.34.0"
	I0916 23:56:57.021499       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 23:56:57.024872       1 config.go:200] "Starting service config controller"
	I0916 23:56:57.024892       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0916 23:56:57.024900       1 config.go:106] "Starting endpoint slice config controller"
	I0916 23:56:57.024909       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0916 23:56:57.024890       1 config.go:403] "Starting serviceCIDR config controller"
	I0916 23:56:57.024917       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0916 23:56:57.024937       1 config.go:309] "Starting node config controller"
	I0916 23:56:57.024942       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0916 23:56:57.125608       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0916 23:56:57.125691       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0916 23:56:57.125856       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0916 23:56:57.125902       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [bba28cace6502de93aa43db4fb51671581c5074990dea721d98d36d839734a67] <==
	E0916 23:56:48.619869       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0916 23:56:48.649766       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0916 23:56:48.673092       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	I0916 23:56:49.170967       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0916 23:57:51.780040       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-x6twd\": pod kindnet-x6twd is already assigned to node \"ha-472903-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-x6twd" node="ha-472903-m03"
	E0916 23:57:51.780142       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod f2346479-1adb-4bc7-af07-971525be2b05(kube-system/kindnet-x6twd) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-x6twd"
	E0916 23:57:51.780183       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-x6twd\": pod kindnet-x6twd is already assigned to node \"ha-472903-m03\"" logger="UnhandledError" pod="kube-system/kindnet-x6twd"
	I0916 23:57:51.782132       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-x6twd" node="ha-472903-m03"
	E0916 23:58:37.948695       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-x6xc9\": pod busybox-7b57f96db7-x6xc9 is already assigned to node \"ha-472903-m02\"" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-x6xc9" node="ha-472903-m02"
	E0916 23:58:37.948846       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 565a634f-ab41-4776-ba5d-63a601bfec48(default/busybox-7b57f96db7-x6xc9) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="default/busybox-7b57f96db7-x6xc9"
	E0916 23:58:37.948875       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-x6xc9\": pod busybox-7b57f96db7-x6xc9 is already assigned to node \"ha-472903-m02\"" logger="UnhandledError" pod="default/busybox-7b57f96db7-x6xc9"
	I0916 23:58:37.950251       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7b57f96db7-x6xc9" node="ha-472903-m02"
	I0916 23:58:37.966099       1 cache.go:512] "Pod was added to a different node than it was assumed" podKey="47b06c15-c007-4c50-a248-5411a0f4b6a7" pod="default/busybox-7b57f96db7-4jfjt" assumedNode="ha-472903-m02" currentNode="ha-472903"
	E0916 23:58:37.968241       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-4jfjt\": pod busybox-7b57f96db7-4jfjt is already assigned to node \"ha-472903-m02\"" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-4jfjt" node="ha-472903"
	E0916 23:58:37.968351       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 47b06c15-c007-4c50-a248-5411a0f4b6a7(default/busybox-7b57f96db7-4jfjt) was assumed on ha-472903 but assigned to ha-472903-m02" logger="UnhandledError" pod="default/busybox-7b57f96db7-4jfjt"
	E0916 23:58:37.968376       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-4jfjt\": pod busybox-7b57f96db7-4jfjt is already assigned to node \"ha-472903-m02\"" logger="UnhandledError" pod="default/busybox-7b57f96db7-4jfjt"
	I0916 23:58:37.969472       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7b57f96db7-4jfjt" node="ha-472903-m02"
	E0916 23:58:38.002469       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-wp95z\": pod busybox-7b57f96db7-wp95z is being deleted, cannot be assigned to a host" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-wp95z" node="ha-472903"
	E0916 23:58:38.002779       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-wp95z\": pod busybox-7b57f96db7-wp95z is being deleted, cannot be assigned to a host" logger="UnhandledError" pod="default/busybox-7b57f96db7-wp95z"
	E0916 23:58:38.046394       1 pod_status_patch.go:111] "Failed to patch pod status" err="pods \"busybox-7b57f96db7-xnrsc\" not found" pod="default/busybox-7b57f96db7-xnrsc"
	E0916 23:58:38.046880       1 pod_status_patch.go:111] "Failed to patch pod status" err="pods \"busybox-7b57f96db7-wp95z\" not found" pod="default/busybox-7b57f96db7-wp95z"
	E0916 23:58:40.050124       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-6hrm6\": pod busybox-7b57f96db7-6hrm6 is already assigned to node \"ha-472903\"" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-6hrm6" node="ha-472903"
	E0916 23:58:40.050213       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod bd03bad4-af1e-42d0-81fb-6fcaeaa8775e(default/busybox-7b57f96db7-6hrm6) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="default/busybox-7b57f96db7-6hrm6"
	E0916 23:58:40.050248       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-6hrm6\": pod busybox-7b57f96db7-6hrm6 is already assigned to node \"ha-472903\"" logger="UnhandledError" pod="default/busybox-7b57f96db7-6hrm6"
	I0916 23:58:40.051853       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7b57f96db7-6hrm6" node="ha-472903"
	
	
	==> kubelet <==
	Sep 16 23:58:38 ha-472903 kubelet[1676]: E0916 23:58:38.235025    1676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cc7a8d10-408f-4655-ac70-54b4af22d9eb-kube-api-access-hrb62 podName:cc7a8d10-408f-4655-ac70-54b4af22d9eb nodeName:}" failed. No retries permitted until 2025-09-16 23:58:38.735007966 +0000 UTC m=+109.066439678 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-hrb62" (UniqueName: "kubernetes.io/projected/cc7a8d10-408f-4655-ac70-54b4af22d9eb-kube-api-access-hrb62") pod "busybox-7b57f96db7-5pwbb" (UID: "cc7a8d10-408f-4655-ac70-54b4af22d9eb") : failed to fetch token: pod "busybox-7b57f96db7-5pwbb" not found
	Sep 16 23:58:38 ha-472903 kubelet[1676]: E0916 23:58:38.737179    1676 projected.go:196] Error preparing data for projected volume kube-api-access-xrpwc for pod default/busybox-7b57f96db7-xj7ks: failed to fetch token: pod "busybox-7b57f96db7-xj7ks" not found
	Sep 16 23:58:38 ha-472903 kubelet[1676]: E0916 23:58:38.737266    1676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cac915f6-7630-4320-b6d2-fd18f3c19a17-kube-api-access-xrpwc podName:cac915f6-7630-4320-b6d2-fd18f3c19a17 nodeName:}" failed. No retries permitted until 2025-09-16 23:58:39.737245356 +0000 UTC m=+110.068677057 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-xrpwc" (UniqueName: "kubernetes.io/projected/cac915f6-7630-4320-b6d2-fd18f3c19a17-kube-api-access-xrpwc") pod "busybox-7b57f96db7-xj7ks" (UID: "cac915f6-7630-4320-b6d2-fd18f3c19a17") : failed to fetch token: pod "busybox-7b57f96db7-xj7ks" not found
	Sep 16 23:58:38 ha-472903 kubelet[1676]: E0916 23:58:38.737179    1676 projected.go:196] Error preparing data for projected volume kube-api-access-hrb62 for pod default/busybox-7b57f96db7-5pwbb: failed to fetch token: pod "busybox-7b57f96db7-5pwbb" not found
	Sep 16 23:58:38 ha-472903 kubelet[1676]: E0916 23:58:38.737371    1676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cc7a8d10-408f-4655-ac70-54b4af22d9eb-kube-api-access-hrb62 podName:cc7a8d10-408f-4655-ac70-54b4af22d9eb nodeName:}" failed. No retries permitted until 2025-09-16 23:58:39.737351933 +0000 UTC m=+110.068783647 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-hrb62" (UniqueName: "kubernetes.io/projected/cc7a8d10-408f-4655-ac70-54b4af22d9eb-kube-api-access-hrb62") pod "busybox-7b57f96db7-5pwbb" (UID: "cc7a8d10-408f-4655-ac70-54b4af22d9eb") : failed to fetch token: pod "busybox-7b57f96db7-5pwbb" not found
	Sep 16 23:58:39 ha-472903 kubelet[1676]: E0916 23:58:39.027158    1676 status_manager.go:1018] "Failed to get status for pod" err="pods \"busybox-7b57f96db7-5pwbb\" is forbidden: User \"system:node:ha-472903\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'ha-472903' and this object" podUID="cc7a8d10-408f-4655-ac70-54b4af22d9eb" pod="default/busybox-7b57f96db7-5pwbb"
	Sep 16 23:58:39 ha-472903 kubelet[1676]: E0916 23:58:39.028111    1676 status_manager.go:1018] "Failed to get status for pod" err="pods \"busybox-7b57f96db7-xj7ks\" is forbidden: User \"system:node:ha-472903\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'ha-472903' and this object" podUID="cac915f6-7630-4320-b6d2-fd18f3c19a17" pod="default/busybox-7b57f96db7-xj7ks"
	Sep 16 23:58:39 ha-472903 kubelet[1676]: E0916 23:58:39.039445    1676 status_manager.go:1018] "Failed to get status for pod" err="pods \"busybox-7b57f96db7-xj7ks\" is forbidden: User \"system:node:ha-472903\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'ha-472903' and this object" podUID="cac915f6-7630-4320-b6d2-fd18f3c19a17" pod="default/busybox-7b57f96db7-xj7ks"
	Sep 16 23:58:39 ha-472903 kubelet[1676]: E0916 23:58:39.042381    1676 status_manager.go:1018] "Failed to get status for pod" err="pods \"busybox-7b57f96db7-5pwbb\" is forbidden: User \"system:node:ha-472903\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'ha-472903' and this object" podUID="cc7a8d10-408f-4655-ac70-54b4af22d9eb" pod="default/busybox-7b57f96db7-5pwbb"
	Sep 16 23:58:39 ha-472903 kubelet[1676]: I0916 23:58:39.138755    1676 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9njqf\" (UniqueName: \"kubernetes.io/projected/59b9a23c-498d-4802-9790-70931c4a2c06-kube-api-access-9njqf\") pod \"59b9a23c-498d-4802-9790-70931c4a2c06\" (UID: \"59b9a23c-498d-4802-9790-70931c4a2c06\") "
	Sep 16 23:58:39 ha-472903 kubelet[1676]: I0916 23:58:39.138821    1676 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hrb62\" (UniqueName: \"kubernetes.io/projected/cc7a8d10-408f-4655-ac70-54b4af22d9eb-kube-api-access-hrb62\") on node \"ha-472903\" DevicePath \"\""
	Sep 16 23:58:39 ha-472903 kubelet[1676]: I0916 23:58:39.138836    1676 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xrpwc\" (UniqueName: \"kubernetes.io/projected/cac915f6-7630-4320-b6d2-fd18f3c19a17-kube-api-access-xrpwc\") on node \"ha-472903\" DevicePath \"\""
	Sep 16 23:58:39 ha-472903 kubelet[1676]: I0916 23:58:39.140952    1676 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59b9a23c-498d-4802-9790-70931c4a2c06-kube-api-access-9njqf" (OuterVolumeSpecName: "kube-api-access-9njqf") pod "59b9a23c-498d-4802-9790-70931c4a2c06" (UID: "59b9a23c-498d-4802-9790-70931c4a2c06"). InnerVolumeSpecName "kube-api-access-9njqf". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Sep 16 23:58:39 ha-472903 kubelet[1676]: I0916 23:58:39.239025    1676 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9njqf\" (UniqueName: \"kubernetes.io/projected/59b9a23c-498d-4802-9790-70931c4a2c06-kube-api-access-9njqf\") on node \"ha-472903\" DevicePath \"\""
	Sep 16 23:58:39 ha-472903 kubelet[1676]: E0916 23:58:39.752137    1676 status_manager.go:1018] "Failed to get status for pod" err="pods \"busybox-7b57f96db7-5pwbb\" is forbidden: User \"system:node:ha-472903\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'ha-472903' and this object" podUID="cc7a8d10-408f-4655-ac70-54b4af22d9eb" pod="default/busybox-7b57f96db7-5pwbb"
	Sep 16 23:58:39 ha-472903 kubelet[1676]: E0916 23:58:39.753199    1676 status_manager.go:1018] "Failed to get status for pod" err="pods \"busybox-7b57f96db7-xj7ks\" is forbidden: User \"system:node:ha-472903\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'ha-472903' and this object" podUID="cac915f6-7630-4320-b6d2-fd18f3c19a17" pod="default/busybox-7b57f96db7-xj7ks"
	Sep 16 23:58:39 ha-472903 kubelet[1676]: I0916 23:58:39.754268    1676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cac915f6-7630-4320-b6d2-fd18f3c19a17" path="/var/lib/kubelet/pods/cac915f6-7630-4320-b6d2-fd18f3c19a17/volumes"
	Sep 16 23:58:39 ha-472903 kubelet[1676]: I0916 23:58:39.754475    1676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc7a8d10-408f-4655-ac70-54b4af22d9eb" path="/var/lib/kubelet/pods/cc7a8d10-408f-4655-ac70-54b4af22d9eb/volumes"
	Sep 16 23:58:40 ha-472903 kubelet[1676]: E0916 23:58:40.056772    1676 status_manager.go:1018] "Failed to get status for pod" err="pods \"busybox-7b57f96db7-5pwbb\" is forbidden: User \"system:node:ha-472903\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'ha-472903' and this object" podUID="cc7a8d10-408f-4655-ac70-54b4af22d9eb" pod="default/busybox-7b57f96db7-5pwbb"
	Sep 16 23:58:40 ha-472903 kubelet[1676]: E0916 23:58:40.057611    1676 status_manager.go:1018] "Failed to get status for pod" err="pods \"busybox-7b57f96db7-xj7ks\" is forbidden: User \"system:node:ha-472903\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'ha-472903' and this object" podUID="cac915f6-7630-4320-b6d2-fd18f3c19a17" pod="default/busybox-7b57f96db7-xj7ks"
	Sep 16 23:58:40 ha-472903 kubelet[1676]: E0916 23:58:40.059208    1676 status_manager.go:1018] "Failed to get status for pod" err="pods \"busybox-7b57f96db7-5pwbb\" is forbidden: User \"system:node:ha-472903\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'ha-472903' and this object" podUID="cc7a8d10-408f-4655-ac70-54b4af22d9eb" pod="default/busybox-7b57f96db7-5pwbb"
	Sep 16 23:58:40 ha-472903 kubelet[1676]: E0916 23:58:40.060512    1676 status_manager.go:1018] "Failed to get status for pod" err="pods \"busybox-7b57f96db7-xj7ks\" is forbidden: User \"system:node:ha-472903\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'ha-472903' and this object" podUID="cac915f6-7630-4320-b6d2-fd18f3c19a17" pod="default/busybox-7b57f96db7-xj7ks"
	Sep 16 23:58:40 ha-472903 kubelet[1676]: I0916 23:58:40.145054    1676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjkrp\" (UniqueName: \"kubernetes.io/projected/bd03bad4-af1e-42d0-81fb-6fcaeaa8775e-kube-api-access-pjkrp\") pod \"busybox-7b57f96db7-6hrm6\" (UID: \"bd03bad4-af1e-42d0-81fb-6fcaeaa8775e\") " pod="default/busybox-7b57f96db7-6hrm6"
	Sep 16 23:58:41 ha-472903 kubelet[1676]: I0916 23:58:41.754549    1676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59b9a23c-498d-4802-9790-70931c4a2c06" path="/var/lib/kubelet/pods/59b9a23c-498d-4802-9790-70931c4a2c06/volumes"
	Sep 16 23:58:43 ha-472903 kubelet[1676]: I0916 23:58:43.049200    1676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-7b57f96db7-6hrm6" podStartSLOduration=3.061025393 podStartE2EDuration="5.049179166s" podCreationTimestamp="2025-09-16 23:58:38 +0000 UTC" firstStartedPulling="2025-09-16 23:58:40.45690156 +0000 UTC m=+110.788333264" lastFinishedPulling="2025-09-16 23:58:42.445055322 +0000 UTC m=+112.776487037" observedRunningTime="2025-09-16 23:58:43.049092106 +0000 UTC m=+113.380523828" watchObservedRunningTime="2025-09-16 23:58:43.049179166 +0000 UTC m=+113.380610888"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-472903 -n ha-472903
helpers_test.go:269: (dbg) Run:  kubectl --context ha-472903 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-7b57f96db7-mknzs
helpers_test.go:282: ======> post-mortem[TestMultiControlPlane/serial/AddWorkerNode]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context ha-472903 describe pod busybox-7b57f96db7-mknzs
helpers_test.go:290: (dbg) kubectl --context ha-472903 describe pod busybox-7b57f96db7-mknzs:

                                                
                                                
-- stdout --
	Name:             busybox-7b57f96db7-mknzs
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             ha-472903-m03/192.168.49.4
	Start Time:       Tue, 16 Sep 2025 23:58:37 +0000
	Labels:           app=busybox
	                  pod-template-hash=7b57f96db7
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7b57f96db7
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gmz92 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-gmz92:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason                  Age                   From               Message
	  ----     ------                  ----                  ----               -------
	  Warning  FailedScheduling        12m                   default-scheduler  running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "busybox-7b57f96db7-mknzs": pod busybox-7b57f96db7-mknzs is already assigned to node "ha-472903-m03"
	  Warning  FailedScheduling        12m                   default-scheduler  running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "busybox-7b57f96db7-mknzs": pod busybox-7b57f96db7-mknzs is already assigned to node "ha-472903-m03"
	  Normal   Scheduled               12m                   default-scheduler  Successfully assigned default/busybox-7b57f96db7-mknzs to ha-472903-m03
	  Warning  FailedCreatePodSandBox  12m                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "72439adc47052c2da00cee62587d780275cf6c2423dee9831567464d4725ee9d": failed to find network info for sandbox "72439adc47052c2da00cee62587d780275cf6c2423dee9831567464d4725ee9d"
	  Warning  FailedCreatePodSandBox  12m                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "24ab8b6bd2f38653d2326c375fc81ebf17317e36885547c7b42c011bb95889ed": failed to find network info for sandbox "24ab8b6bd2f38653d2326c375fc81ebf17317e36885547c7b42c011bb95889ed"
	  Warning  FailedCreatePodSandBox  12m                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "300fece4c100bc3e68a19e1fa6f46c8a378753727caaaeb1533dab71f234be58": failed to find network info for sandbox "300fece4c100bc3e68a19e1fa6f46c8a378753727caaaeb1533dab71f234be58"
	  Warning  FailedCreatePodSandBox  12m                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "e49a14b4de5e24fa450a43c124b2916ad7028d35cbc3b0f74595e68ee161d1d0": failed to find network info for sandbox "e49a14b4de5e24fa450a43c124b2916ad7028d35cbc3b0f74595e68ee161d1d0"
	  Warning  FailedCreatePodSandBox  11m                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "efa290ca498f7c70ae29d8d97709edda97bc6b062aac05a3ef6d6a83fbd42797": failed to find network info for sandbox "efa290ca498f7c70ae29d8d97709edda97bc6b062aac05a3ef6d6a83fbd42797"
	  Warning  FailedCreatePodSandBox  11m                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "d5851ce1270b1c8994400ecd7bdabadaf895488957ffb5173dcd7e289db1de6c": failed to find network info for sandbox "d5851ce1270b1c8994400ecd7bdabadaf895488957ffb5173dcd7e289db1de6c"
	  Warning  FailedCreatePodSandBox  11m                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "11aaa894ae434b08da8122c8f3445d03b4c1e54dfb071596f63a0e4654f49f10": failed to find network info for sandbox "11aaa894ae434b08da8122c8f3445d03b4c1e54dfb071596f63a0e4654f49f10"
	  Warning  FailedCreatePodSandBox  11m                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "c8126e80126ff891a4935c60cfec55753f6bb51d789c0eb46098b72267c7d53c": failed to find network info for sandbox "c8126e80126ff891a4935c60cfec55753f6bb51d789c0eb46098b72267c7d53c"
	  Warning  FailedCreatePodSandBox  11m                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "1389a2f92f350a6f495c76f80031300b6442a6a0cc67abd4b045ff9150b3fc3a": failed to find network info for sandbox "1389a2f92f350a6f495c76f80031300b6442a6a0cc67abd4b045ff9150b3fc3a"
	  Warning  FailedCreatePodSandBox  2m35s (x38 over 10m)  kubelet            (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "c3a9afe91461f3ea405980387ac5fab85785c7cf3f180d2b0f894e1df94ca62d": failed to find network info for sandbox "c3a9afe91461f3ea405980387ac5fab85785c7cf3f180d2b0f894e1df94ca62d"

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestMultiControlPlane/serial/AddWorkerNode FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (29.82s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (15.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-472903 status --output json --alsologtostderr -v 5: exit status 7 (688.248389ms)

                                                
                                                
-- stdout --
	[{"Name":"ha-472903","Host":"Running","Kubelet":"Running","APIServer":"Running","Kubeconfig":"Configured","Worker":false},{"Name":"ha-472903-m02","Host":"Running","Kubelet":"Running","APIServer":"Running","Kubeconfig":"Configured","Worker":false},{"Name":"ha-472903-m03","Host":"Running","Kubelet":"Running","APIServer":"Running","Kubeconfig":"Configured","Worker":false},{"Name":"ha-472903-m04","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":true}]

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 00:11:18.591672  823994 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:11:18.591755  823994 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:11:18.591763  823994 out.go:374] Setting ErrFile to fd 2...
	I0917 00:11:18.591767  823994 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:11:18.591935  823994 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-749120/.minikube/bin
	I0917 00:11:18.592139  823994 out.go:368] Setting JSON to true
	I0917 00:11:18.592160  823994 mustload.go:65] Loading cluster: ha-472903
	I0917 00:11:18.592209  823994 notify.go:220] Checking for updates...
	I0917 00:11:18.592554  823994 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:11:18.592579  823994 status.go:174] checking status of ha-472903 ...
	I0917 00:11:18.593006  823994 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0917 00:11:18.611885  823994 status.go:371] ha-472903 host status = "Running" (err=<nil>)
	I0917 00:11:18.611930  823994 host.go:66] Checking if "ha-472903" exists ...
	I0917 00:11:18.612330  823994 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903
	I0917 00:11:18.629203  823994 host.go:66] Checking if "ha-472903" exists ...
	I0917 00:11:18.629492  823994 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:11:18.629551  823994 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0917 00:11:18.646091  823994 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0917 00:11:18.737275  823994 ssh_runner.go:195] Run: systemctl --version
	I0917 00:11:18.741623  823994 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:11:18.753245  823994 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:11:18.807027  823994 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-17 00:11:18.797148218 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:11:18.807619  823994 kubeconfig.go:125] found "ha-472903" server: "https://192.168.49.254:8443"
	I0917 00:11:18.807653  823994 api_server.go:166] Checking apiserver status ...
	I0917 00:11:18.807696  823994 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:11:18.819638  823994 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1514/cgroup
	W0917 00:11:18.829711  823994 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1514/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:11:18.829749  823994 ssh_runner.go:195] Run: ls
	I0917 00:11:18.833458  823994 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:11:18.837453  823994 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:11:18.837477  823994 status.go:463] ha-472903 apiserver status = Running (err=<nil>)
	I0917 00:11:18.837490  823994 status.go:176] ha-472903 status: &{Name:ha-472903 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:11:18.837510  823994 status.go:174] checking status of ha-472903-m02 ...
	I0917 00:11:18.837729  823994 cli_runner.go:164] Run: docker container inspect ha-472903-m02 --format={{.State.Status}}
	I0917 00:11:18.854155  823994 status.go:371] ha-472903-m02 host status = "Running" (err=<nil>)
	I0917 00:11:18.854172  823994 host.go:66] Checking if "ha-472903-m02" exists ...
	I0917 00:11:18.854407  823994 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m02
	I0917 00:11:18.870958  823994 host.go:66] Checking if "ha-472903-m02" exists ...
	I0917 00:11:18.871277  823994 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:11:18.871327  823994 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0917 00:11:18.887183  823994 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33549 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa Username:docker}
	I0917 00:11:18.979906  823994 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:11:18.991665  823994 kubeconfig.go:125] found "ha-472903" server: "https://192.168.49.254:8443"
	I0917 00:11:18.991693  823994 api_server.go:166] Checking apiserver status ...
	I0917 00:11:18.991722  823994 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:11:19.002451  823994 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1453/cgroup
	W0917 00:11:19.011838  823994 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1453/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:11:19.011883  823994 ssh_runner.go:195] Run: ls
	I0917 00:11:19.015203  823994 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:11:19.019975  823994 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:11:19.020005  823994 status.go:463] ha-472903-m02 apiserver status = Running (err=<nil>)
	I0917 00:11:19.020015  823994 status.go:176] ha-472903-m02 status: &{Name:ha-472903-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:11:19.020044  823994 status.go:174] checking status of ha-472903-m03 ...
	I0917 00:11:19.020350  823994 cli_runner.go:164] Run: docker container inspect ha-472903-m03 --format={{.State.Status}}
	I0917 00:11:19.038339  823994 status.go:371] ha-472903-m03 host status = "Running" (err=<nil>)
	I0917 00:11:19.038358  823994 host.go:66] Checking if "ha-472903-m03" exists ...
	I0917 00:11:19.038672  823994 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m03
	I0917 00:11:19.055996  823994 host.go:66] Checking if "ha-472903-m03" exists ...
	I0917 00:11:19.056262  823994 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:11:19.056305  823994 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0917 00:11:19.073270  823994 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33554 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa Username:docker}
	I0917 00:11:19.168128  823994 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:11:19.181024  823994 kubeconfig.go:125] found "ha-472903" server: "https://192.168.49.254:8443"
	I0917 00:11:19.181058  823994 api_server.go:166] Checking apiserver status ...
	I0917 00:11:19.181111  823994 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:11:19.192558  823994 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1396/cgroup
	W0917 00:11:19.203530  823994 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1396/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:11:19.203609  823994 ssh_runner.go:195] Run: ls
	I0917 00:11:19.207832  823994 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:11:19.211965  823994 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:11:19.211993  823994 status.go:463] ha-472903-m03 apiserver status = Running (err=<nil>)
	I0917 00:11:19.212016  823994 status.go:176] ha-472903-m03 status: &{Name:ha-472903-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:11:19.212035  823994 status.go:174] checking status of ha-472903-m04 ...
	I0917 00:11:19.212335  823994 cli_runner.go:164] Run: docker container inspect ha-472903-m04 --format={{.State.Status}}
	I0917 00:11:19.231021  823994 status.go:371] ha-472903-m04 host status = "Stopped" (err=<nil>)
	I0917 00:11:19.231046  823994 status.go:384] host is not running, skipping remaining checks
	I0917 00:11:19.231055  823994 status.go:176] ha-472903-m04 status: &{Name:ha-472903-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 cp testdata/cp-test.txt ha-472903:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 ssh -n ha-472903 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 cp ha-472903:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2970888916/001/cp-test_ha-472903.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 ssh -n ha-472903 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 cp ha-472903:/home/docker/cp-test.txt ha-472903-m02:/home/docker/cp-test_ha-472903_ha-472903-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 ssh -n ha-472903 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 ssh -n ha-472903-m02 "sudo cat /home/docker/cp-test_ha-472903_ha-472903-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 cp ha-472903:/home/docker/cp-test.txt ha-472903-m03:/home/docker/cp-test_ha-472903_ha-472903-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 ssh -n ha-472903 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 ssh -n ha-472903-m03 "sudo cat /home/docker/cp-test_ha-472903_ha-472903-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 cp ha-472903:/home/docker/cp-test.txt ha-472903-m04:/home/docker/cp-test_ha-472903_ha-472903-m04.txt
helpers_test.go:573: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-472903 cp ha-472903:/home/docker/cp-test.txt ha-472903-m04:/home/docker/cp-test_ha-472903_ha-472903-m04.txt: exit status 1 (143.399492ms)

                                                
                                                
** stderr ** 
	getting host: "ha-472903-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:578: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-472903 cp ha-472903:/home/docker/cp-test.txt ha-472903-m04:/home/docker/cp-test_ha-472903_ha-472903-m04.txt" : exit status 1
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 ssh -n ha-472903 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 ssh -n ha-472903-m04 "sudo cat /home/docker/cp-test_ha-472903_ha-472903-m04.txt"
helpers_test.go:551: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-472903 ssh -n ha-472903-m04 "sudo cat /home/docker/cp-test_ha-472903_ha-472903-m04.txt": exit status 1 (145.765655ms)

                                                
                                                
** stderr ** 
	ssh: "ha-472903-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:556: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-472903 ssh -n ha-472903-m04 \"sudo cat /home/docker/cp-test_ha-472903_ha-472903-m04.txt\"" : exit status 1
helpers_test.go:590: /testdata/cp-test.txt content mismatch (-want +got):
string(
- 	"Test file for checking file cp process",
+ 	"",
)
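Every block in this CopyFile sequence follows the same pattern: minikube cp pushes testdata/cp-test.txt to a node, minikube ssh -n <node> reads it back, and the two strings are diffed; the (-want +got) output above is that diff when the target node (ha-472903-m04, which is stopped) returns nothing, so the got side is the empty string. A rough, self-contained sketch of the pattern using github.com/google/go-cmp; the helper names here are illustrative, not the test's actual code:

package integration

import (
	"os"
	"os/exec"
	"testing"

	"github.com/google/go-cmp/cmp"
)

// runCmd executes one minikube invocation and returns its combined output;
// in this log the commands against the stopped m04 node exit non-zero.
func runCmd(t *testing.T, name string, args ...string) string {
	t.Helper()
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		t.Logf("%s %v failed: %v", name, args, err)
	}
	return string(out)
}

// copyAndVerify pushes the fixture to a node, reads it back over ssh and
// reports a want/got diff when the round-tripped contents differ.
func copyAndVerify(t *testing.T, profile, node, fixture string) {
	t.Helper()
	want, err := os.ReadFile(fixture)
	if err != nil {
		t.Fatalf("reading fixture: %v", err)
	}
	runCmd(t, "out/minikube-linux-amd64", "-p", profile, "cp", fixture, node+":/home/docker/cp-test.txt")
	got := runCmd(t, "out/minikube-linux-amd64", "-p", profile, "ssh", "-n", node, "sudo cat /home/docker/cp-test.txt")
	if diff := cmp.Diff(string(want), got); diff != "" {
		t.Errorf("%s content mismatch (-want +got):\n%s", fixture, diff)
	}
}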
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 cp testdata/cp-test.txt ha-472903-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 ssh -n ha-472903-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 cp ha-472903-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2970888916/001/cp-test_ha-472903-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 ssh -n ha-472903-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 cp ha-472903-m02:/home/docker/cp-test.txt ha-472903:/home/docker/cp-test_ha-472903-m02_ha-472903.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 ssh -n ha-472903-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 ssh -n ha-472903 "sudo cat /home/docker/cp-test_ha-472903-m02_ha-472903.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 cp ha-472903-m02:/home/docker/cp-test.txt ha-472903-m03:/home/docker/cp-test_ha-472903-m02_ha-472903-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 ssh -n ha-472903-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 ssh -n ha-472903-m03 "sudo cat /home/docker/cp-test_ha-472903-m02_ha-472903-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 cp ha-472903-m02:/home/docker/cp-test.txt ha-472903-m04:/home/docker/cp-test_ha-472903-m02_ha-472903-m04.txt
helpers_test.go:573: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-472903 cp ha-472903-m02:/home/docker/cp-test.txt ha-472903-m04:/home/docker/cp-test_ha-472903-m02_ha-472903-m04.txt: exit status 1 (144.292985ms)

                                                
                                                
** stderr ** 
	getting host: "ha-472903-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:578: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-472903 cp ha-472903-m02:/home/docker/cp-test.txt ha-472903-m04:/home/docker/cp-test_ha-472903-m02_ha-472903-m04.txt" : exit status 1
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 ssh -n ha-472903-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 ssh -n ha-472903-m04 "sudo cat /home/docker/cp-test_ha-472903-m02_ha-472903-m04.txt"
helpers_test.go:551: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-472903 ssh -n ha-472903-m04 "sudo cat /home/docker/cp-test_ha-472903-m02_ha-472903-m04.txt": exit status 1 (145.428417ms)

                                                
                                                
** stderr ** 
	ssh: "ha-472903-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:556: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-472903 ssh -n ha-472903-m04 \"sudo cat /home/docker/cp-test_ha-472903-m02_ha-472903-m04.txt\"" : exit status 1
helpers_test.go:590: /testdata/cp-test.txt content mismatch (-want +got):
string(
- 	"Test file for checking file cp process",
+ 	"",
)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 cp testdata/cp-test.txt ha-472903-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 ssh -n ha-472903-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 cp ha-472903-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2970888916/001/cp-test_ha-472903-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 ssh -n ha-472903-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 cp ha-472903-m03:/home/docker/cp-test.txt ha-472903:/home/docker/cp-test_ha-472903-m03_ha-472903.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 ssh -n ha-472903-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 ssh -n ha-472903 "sudo cat /home/docker/cp-test_ha-472903-m03_ha-472903.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 cp ha-472903-m03:/home/docker/cp-test.txt ha-472903-m02:/home/docker/cp-test_ha-472903-m03_ha-472903-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 ssh -n ha-472903-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 ssh -n ha-472903-m02 "sudo cat /home/docker/cp-test_ha-472903-m03_ha-472903-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 cp ha-472903-m03:/home/docker/cp-test.txt ha-472903-m04:/home/docker/cp-test_ha-472903-m03_ha-472903-m04.txt
helpers_test.go:573: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-472903 cp ha-472903-m03:/home/docker/cp-test.txt ha-472903-m04:/home/docker/cp-test_ha-472903-m03_ha-472903-m04.txt: exit status 1 (141.766443ms)

                                                
                                                
** stderr ** 
	getting host: "ha-472903-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:578: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-472903 cp ha-472903-m03:/home/docker/cp-test.txt ha-472903-m04:/home/docker/cp-test_ha-472903-m03_ha-472903-m04.txt" : exit status 1
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 ssh -n ha-472903-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 ssh -n ha-472903-m04 "sudo cat /home/docker/cp-test_ha-472903-m03_ha-472903-m04.txt"
helpers_test.go:551: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-472903 ssh -n ha-472903-m04 "sudo cat /home/docker/cp-test_ha-472903-m03_ha-472903-m04.txt": exit status 1 (154.304179ms)

                                                
                                                
** stderr ** 
	ssh: "ha-472903-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:556: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-472903 ssh -n ha-472903-m04 \"sudo cat /home/docker/cp-test_ha-472903-m03_ha-472903-m04.txt\"" : exit status 1
helpers_test.go:590: /testdata/cp-test.txt content mismatch (-want +got):
string(
- 	"Test file for checking file cp process",
+ 	"",
)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 cp testdata/cp-test.txt ha-472903-m04:/home/docker/cp-test.txt
helpers_test.go:573: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-472903 cp testdata/cp-test.txt ha-472903-m04:/home/docker/cp-test.txt: exit status 1 (142.075163ms)

                                                
                                                
** stderr ** 
	getting host: "ha-472903-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:578: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-472903 cp testdata/cp-test.txt ha-472903-m04:/home/docker/cp-test.txt" : exit status 1
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 ssh -n ha-472903-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-472903 ssh -n ha-472903-m04 "sudo cat /home/docker/cp-test.txt": exit status 1 (145.092299ms)

                                                
                                                
** stderr ** 
	ssh: "ha-472903-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:556: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-472903 ssh -n ha-472903-m04 \"sudo cat /home/docker/cp-test.txt\"" : exit status 1
helpers_test.go:590: /testdata/cp-test.txt content mismatch (-want +got):
string(
- 	"Test file for checking file cp process",
+ 	"",
)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 cp ha-472903-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2970888916/001/cp-test_ha-472903-m04.txt
helpers_test.go:573: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-472903 cp ha-472903-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2970888916/001/cp-test_ha-472903-m04.txt: exit status 1 (143.29525ms)

                                                
                                                
** stderr ** 
	getting host: "ha-472903-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:578: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-472903 cp ha-472903-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2970888916/001/cp-test_ha-472903-m04.txt" : exit status 1
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 ssh -n ha-472903-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-472903 ssh -n ha-472903-m04 "sudo cat /home/docker/cp-test.txt": exit status 1 (136.251328ms)

                                                
                                                
** stderr ** 
	ssh: "ha-472903-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:556: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-472903 ssh -n ha-472903-m04 \"sudo cat /home/docker/cp-test.txt\"" : exit status 1
helpers_test.go:545: failed to read test file 'testdata/cp-test.txt' : open /tmp/TestMultiControlPlaneserialCopyFile2970888916/001/cp-test_ha-472903-m04.txt: no such file or directory
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 cp ha-472903-m04:/home/docker/cp-test.txt ha-472903:/home/docker/cp-test_ha-472903-m04_ha-472903.txt
helpers_test.go:573: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-472903 cp ha-472903-m04:/home/docker/cp-test.txt ha-472903:/home/docker/cp-test_ha-472903-m04_ha-472903.txt: exit status 1 (157.185927ms)

                                                
                                                
** stderr ** 
	getting host: "ha-472903-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:578: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-472903 cp ha-472903-m04:/home/docker/cp-test.txt ha-472903:/home/docker/cp-test_ha-472903-m04_ha-472903.txt" : exit status 1
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 ssh -n ha-472903-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-472903 ssh -n ha-472903-m04 "sudo cat /home/docker/cp-test.txt": exit status 1 (137.334187ms)

                                                
                                                
** stderr ** 
	ssh: "ha-472903-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:556: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-472903 ssh -n ha-472903-m04 \"sudo cat /home/docker/cp-test.txt\"" : exit status 1
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 ssh -n ha-472903 "sudo cat /home/docker/cp-test_ha-472903-m04_ha-472903.txt"
helpers_test.go:551: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-472903 ssh -n ha-472903 "sudo cat /home/docker/cp-test_ha-472903-m04_ha-472903.txt": exit status 1 (261.897203ms)

                                                
                                                
-- stdout --
	cat: /home/docker/cp-test_ha-472903-m04_ha-472903.txt: No such file or directory

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
helpers_test.go:556: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-472903 ssh -n ha-472903 \"sudo cat /home/docker/cp-test_ha-472903-m04_ha-472903.txt\"" : exit status 1
helpers_test.go:590: /testdata/cp-test.txt content mismatch (-want +got):
string(
- 	"",
+ 	"cat: /home/docker/cp-test_ha-472903-m04_ha-472903.txt: No such file or directory\r\n",
)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 cp ha-472903-m04:/home/docker/cp-test.txt ha-472903-m02:/home/docker/cp-test_ha-472903-m04_ha-472903-m02.txt
helpers_test.go:573: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-472903 cp ha-472903-m04:/home/docker/cp-test.txt ha-472903-m02:/home/docker/cp-test_ha-472903-m04_ha-472903-m02.txt: exit status 1 (162.366912ms)

                                                
                                                
** stderr ** 
	getting host: "ha-472903-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:578: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-472903 cp ha-472903-m04:/home/docker/cp-test.txt ha-472903-m02:/home/docker/cp-test_ha-472903-m04_ha-472903-m02.txt" : exit status 1
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 ssh -n ha-472903-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-472903 ssh -n ha-472903-m04 "sudo cat /home/docker/cp-test.txt": exit status 1 (138.781335ms)

                                                
                                                
** stderr ** 
	ssh: "ha-472903-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:556: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-472903 ssh -n ha-472903-m04 \"sudo cat /home/docker/cp-test.txt\"" : exit status 1
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 ssh -n ha-472903-m02 "sudo cat /home/docker/cp-test_ha-472903-m04_ha-472903-m02.txt"
helpers_test.go:551: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-472903 ssh -n ha-472903-m02 "sudo cat /home/docker/cp-test_ha-472903-m04_ha-472903-m02.txt": exit status 1 (262.986784ms)

                                                
                                                
-- stdout --
	cat: /home/docker/cp-test_ha-472903-m04_ha-472903-m02.txt: No such file or directory

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
helpers_test.go:556: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-472903 ssh -n ha-472903-m02 \"sudo cat /home/docker/cp-test_ha-472903-m04_ha-472903-m02.txt\"" : exit status 1
helpers_test.go:590: /testdata/cp-test.txt content mismatch (-want +got):
string(
- 	"",
+ 	"cat: /home/docker/cp-test_ha-472903-m04_ha-472903-m02.txt: No such file or directory\r\n",
)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 cp ha-472903-m04:/home/docker/cp-test.txt ha-472903-m03:/home/docker/cp-test_ha-472903-m04_ha-472903-m03.txt
helpers_test.go:573: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-472903 cp ha-472903-m04:/home/docker/cp-test.txt ha-472903-m03:/home/docker/cp-test_ha-472903-m04_ha-472903-m03.txt: exit status 1 (166.800926ms)

                                                
                                                
** stderr ** 
	getting host: "ha-472903-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:578: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-472903 cp ha-472903-m04:/home/docker/cp-test.txt ha-472903-m03:/home/docker/cp-test_ha-472903-m04_ha-472903-m03.txt" : exit status 1
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 ssh -n ha-472903-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-472903 ssh -n ha-472903-m04 "sudo cat /home/docker/cp-test.txt": exit status 1 (141.770645ms)

                                                
                                                
** stderr ** 
	ssh: "ha-472903-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:556: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-472903 ssh -n ha-472903-m04 \"sudo cat /home/docker/cp-test.txt\"" : exit status 1
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 ssh -n ha-472903-m03 "sudo cat /home/docker/cp-test_ha-472903-m04_ha-472903-m03.txt"
helpers_test.go:551: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-472903 ssh -n ha-472903-m03 "sudo cat /home/docker/cp-test_ha-472903-m04_ha-472903-m03.txt": exit status 1 (258.07467ms)

                                                
                                                
-- stdout --
	cat: /home/docker/cp-test_ha-472903-m04_ha-472903-m03.txt: No such file or directory

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
helpers_test.go:556: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-472903 ssh -n ha-472903-m03 \"sudo cat /home/docker/cp-test_ha-472903-m04_ha-472903-m03.txt\"" : exit status 1
helpers_test.go:590: /testdata/cp-test.txt content mismatch (-want +got):
string(
- 	"",
+ 	"cat: /home/docker/cp-test_ha-472903-m04_ha-472903-m03.txt: No such file or directory\r\n",
)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/CopyFile]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/CopyFile]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-472903
helpers_test.go:243: (dbg) docker inspect ha-472903:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "05f03528ecc5ba6a39041bcc2845d236679d61fa3752c15e7e068dac7d8c9047",
	        "Created": "2025-09-16T23:56:35.178831158Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 804802,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-16T23:56:35.209552026Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/05f03528ecc5ba6a39041bcc2845d236679d61fa3752c15e7e068dac7d8c9047/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/05f03528ecc5ba6a39041bcc2845d236679d61fa3752c15e7e068dac7d8c9047/hostname",
	        "HostsPath": "/var/lib/docker/containers/05f03528ecc5ba6a39041bcc2845d236679d61fa3752c15e7e068dac7d8c9047/hosts",
	        "LogPath": "/var/lib/docker/containers/05f03528ecc5ba6a39041bcc2845d236679d61fa3752c15e7e068dac7d8c9047/05f03528ecc5ba6a39041bcc2845d236679d61fa3752c15e7e068dac7d8c9047-json.log",
	        "Name": "/ha-472903",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-472903:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-472903",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "05f03528ecc5ba6a39041bcc2845d236679d61fa3752c15e7e068dac7d8c9047",
	                "LowerDir": "/var/lib/docker/overlay2/37229b42d46c992f89d690b880f5a9c43e154eecc2ad5aeee133e9eb30accccb-init/diff:/var/lib/docker/overlay2/949a3fbecd0c2c005aa419b7ddc214e7bf4333225d7b227e8b0d0ea188b945ec/diff",
	                "MergedDir": "/var/lib/docker/overlay2/37229b42d46c992f89d690b880f5a9c43e154eecc2ad5aeee133e9eb30accccb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/37229b42d46c992f89d690b880f5a9c43e154eecc2ad5aeee133e9eb30accccb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/37229b42d46c992f89d690b880f5a9c43e154eecc2ad5aeee133e9eb30accccb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-472903",
	                "Source": "/var/lib/docker/volumes/ha-472903/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-472903",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-472903",
	                "name.minikube.sigs.k8s.io": "ha-472903",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "abe382ce28757e80b5cdae91a64217d3672b21c23f3517480bd53105aeca147e",
	            "SandboxKey": "/var/run/docker/netns/abe382ce2875",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33544"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33545"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33548"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33546"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33547"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-472903": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7e:42:9f:f6:50:c2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "22d49b2f397dfabc2a3967bd54b05204a52976e683f65ff07bff00e793040bef",
	                    "EndpointID": "4d4d83129a167c8183e8ef58cc6057f613d8d69adf59710ba6c623d1ff2970c6",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-472903",
	                        "05f03528ecc5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
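The raw docker inspect dump above complements the templated queries used earlier in this test (for example docker container inspect ha-472903-m04 --format={{.State.Status}}, which is how the harness decided the node host was "Stopped"). A small sketch of issuing the same templated query from Go, assuming only that the docker CLI is on PATH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerStatus asks the docker CLI for just the container state instead
// of the full inspect JSON shown above.
func containerStatus(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", fmt.Errorf("inspect %s: %w", name, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	for _, name := range []string{"ha-472903", "ha-472903-m04"} {
		status, err := containerStatus(name)
		// ha-472903 reports "running"; the stopped m04 node would report
		// "exited" (or an error if the container no longer exists).
		fmt.Println(name, status, err)
	}
}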
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-472903 -n ha-472903
helpers_test.go:252: <<< TestMultiControlPlane/serial/CopyFile FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/CopyFile]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p ha-472903 logs -n 25: (1.132642501s)
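Lines like the Done: ... (1.132642501s) above, and the Completed: ... entries later in this log, come from the harness timing each CLI invocation and annotating only the slow ones with their duration. A minimal sketch of that idea; the 2-second threshold below is an assumption for illustration, not the harness's actual cutoff:

package main

import (
	"log"
	"os/exec"
	"strings"
	"time"
)

// runTimed runs a command and, when it takes noticeably long, logs a
// "Completed ... (duration)" style line similar to the ones in this report.
func runTimed(name string, args ...string) ([]byte, error) {
	start := time.Now()
	out, err := exec.Command(name, args...).CombinedOutput()
	if d := time.Since(start); d > 2*time.Second { // assumed threshold
		log.Printf("Completed: %s %s: (%s)", name, strings.Join(args, " "), d)
	}
	return out, err
}

func main() {
	if _, err := runTimed("out/minikube-linux-amd64", "-p", "ha-472903", "logs", "-n", "25"); err != nil {
		log.Printf("logs failed: %v", err)
	}
}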
helpers_test.go:260: TestMultiControlPlane/serial/CopyFile logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-472903 ssh -n ha-472903-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │ 17 Sep 25 00:11 UTC │
	│ cp      │ ha-472903 cp ha-472903-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2970888916/001/cp-test_ha-472903-m03.txt │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │ 17 Sep 25 00:11 UTC │
	│ ssh     │ ha-472903 ssh -n ha-472903-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │ 17 Sep 25 00:11 UTC │
	│ cp      │ ha-472903 cp ha-472903-m03:/home/docker/cp-test.txt ha-472903:/home/docker/cp-test_ha-472903-m03_ha-472903.txt                       │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │ 17 Sep 25 00:11 UTC │
	│ ssh     │ ha-472903 ssh -n ha-472903-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │ 17 Sep 25 00:11 UTC │
	│ ssh     │ ha-472903 ssh -n ha-472903 sudo cat /home/docker/cp-test_ha-472903-m03_ha-472903.txt                                                 │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │ 17 Sep 25 00:11 UTC │
	│ cp      │ ha-472903 cp ha-472903-m03:/home/docker/cp-test.txt ha-472903-m02:/home/docker/cp-test_ha-472903-m03_ha-472903-m02.txt               │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │ 17 Sep 25 00:11 UTC │
	│ ssh     │ ha-472903 ssh -n ha-472903-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │ 17 Sep 25 00:11 UTC │
	│ ssh     │ ha-472903 ssh -n ha-472903-m02 sudo cat /home/docker/cp-test_ha-472903-m03_ha-472903-m02.txt                                         │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │ 17 Sep 25 00:11 UTC │
	│ cp      │ ha-472903 cp ha-472903-m03:/home/docker/cp-test.txt ha-472903-m04:/home/docker/cp-test_ha-472903-m03_ha-472903-m04.txt               │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ ssh     │ ha-472903 ssh -n ha-472903-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │ 17 Sep 25 00:11 UTC │
	│ ssh     │ ha-472903 ssh -n ha-472903-m04 sudo cat /home/docker/cp-test_ha-472903-m03_ha-472903-m04.txt                                         │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ cp      │ ha-472903 cp testdata/cp-test.txt ha-472903-m04:/home/docker/cp-test.txt                                                             │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ ssh     │ ha-472903 ssh -n ha-472903-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ cp      │ ha-472903 cp ha-472903-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2970888916/001/cp-test_ha-472903-m04.txt │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ ssh     │ ha-472903 ssh -n ha-472903-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ cp      │ ha-472903 cp ha-472903-m04:/home/docker/cp-test.txt ha-472903:/home/docker/cp-test_ha-472903-m04_ha-472903.txt                       │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ ssh     │ ha-472903 ssh -n ha-472903-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ ssh     │ ha-472903 ssh -n ha-472903 sudo cat /home/docker/cp-test_ha-472903-m04_ha-472903.txt                                                 │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ cp      │ ha-472903 cp ha-472903-m04:/home/docker/cp-test.txt ha-472903-m02:/home/docker/cp-test_ha-472903-m04_ha-472903-m02.txt               │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ ssh     │ ha-472903 ssh -n ha-472903-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ ssh     │ ha-472903 ssh -n ha-472903-m02 sudo cat /home/docker/cp-test_ha-472903-m04_ha-472903-m02.txt                                         │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ cp      │ ha-472903 cp ha-472903-m04:/home/docker/cp-test.txt ha-472903-m03:/home/docker/cp-test_ha-472903-m04_ha-472903-m03.txt               │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ ssh     │ ha-472903 ssh -n ha-472903-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ ssh     │ ha-472903 ssh -n ha-472903-m03 sudo cat /home/docker/cp-test_ha-472903-m04_ha-472903-m03.txt                                         │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/16 23:56:30
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 23:56:30.301112  804231 out.go:360] Setting OutFile to fd 1 ...
	I0916 23:56:30.301322  804231 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0916 23:56:30.301330  804231 out.go:374] Setting ErrFile to fd 2...
	I0916 23:56:30.301335  804231 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0916 23:56:30.301535  804231 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-749120/.minikube/bin
	I0916 23:56:30.302024  804231 out.go:368] Setting JSON to false
	I0916 23:56:30.302925  804231 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":9532,"bootTime":1758057458,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 23:56:30.303027  804231 start.go:140] virtualization: kvm guest
	I0916 23:56:30.304965  804231 out.go:179] * [ha-472903] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0916 23:56:30.306181  804231 out.go:179]   - MINIKUBE_LOCATION=21550
	I0916 23:56:30.306189  804231 notify.go:220] Checking for updates...
	I0916 23:56:30.308309  804231 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 23:56:30.309530  804231 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-749120/kubeconfig
	I0916 23:56:30.310577  804231 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-749120/.minikube
	I0916 23:56:30.311523  804231 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 23:56:30.312490  804231 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 23:56:30.313634  804231 driver.go:421] Setting default libvirt URI to qemu:///system
	I0916 23:56:30.336203  804231 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0916 23:56:30.336330  804231 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 23:56:30.390690  804231 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-16 23:56:30.380521507 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 23:56:30.390801  804231 docker.go:318] overlay module found
	I0916 23:56:30.392435  804231 out.go:179] * Using the docker driver based on user configuration
	I0916 23:56:30.393493  804231 start.go:304] selected driver: docker
	I0916 23:56:30.393505  804231 start.go:918] validating driver "docker" against <nil>
	I0916 23:56:30.393517  804231 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 23:56:30.394092  804231 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 23:56:30.448140  804231 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-16 23:56:30.438500908 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 23:56:30.448302  804231 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0916 23:56:30.448529  804231 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 23:56:30.450143  804231 out.go:179] * Using Docker driver with root privileges
	I0916 23:56:30.451156  804231 cni.go:84] Creating CNI manager for ""
	I0916 23:56:30.451216  804231 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0916 23:56:30.451226  804231 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 23:56:30.451301  804231 start.go:348] cluster config:
	{Name:ha-472903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 23:56:30.452491  804231 out.go:179] * Starting "ha-472903" primary control-plane node in "ha-472903" cluster
	I0916 23:56:30.453469  804231 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0916 23:56:30.454617  804231 out.go:179] * Pulling base image v0.0.48 ...
	I0916 23:56:30.455626  804231 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0916 23:56:30.455658  804231 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4
	I0916 23:56:30.455669  804231 cache.go:58] Caching tarball of preloaded images
	I0916 23:56:30.455737  804231 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0916 23:56:30.455747  804231 preload.go:172] Found /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 23:56:30.455875  804231 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0916 23:56:30.456208  804231 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0916 23:56:30.456245  804231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json: {Name:mkb16495f6ef626fa58a9600f3b4a943b5aaf14d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
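The profile configuration being written here is plain JSON under the profile directory, and the cluster-config dump a few lines above shows the kind of fields it carries. A minimal, hypothetical reader for a few of those fields; the struct below is an illustration rather than minikube's actual config type, and the path is simply this run's MINIKUBE_HOME:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// profileConfig mirrors a handful of fields visible in the cluster config
// logged above; minikube's real config struct is much larger.
type profileConfig struct {
	Name             string
	Driver           string
	KubernetesConfig struct {
		ClusterName       string
		KubernetesVersion string
		ContainerRuntime  string
	}
}

func main() {
	path := "/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json"
	raw, err := os.ReadFile(path)
	if err != nil {
		fmt.Println(err)
		return
	}
	var cfg profileConfig
	if err := json.Unmarshal(raw, &cfg); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(cfg.Name, cfg.Driver, cfg.KubernetesConfig.KubernetesVersion)
}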
	I0916 23:56:30.475568  804231 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0916 23:56:30.475587  804231 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0916 23:56:30.475611  804231 cache.go:232] Successfully downloaded all kic artifacts
	I0916 23:56:30.475644  804231 start.go:360] acquireMachinesLock for ha-472903: {Name:mk994658ce3314f2aed1dec341debc49d36a4326 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 23:56:30.475759  804231 start.go:364] duration metric: took 97.738µs to acquireMachinesLock for "ha-472903"
	I0916 23:56:30.475786  804231 start.go:93] Provisioning new machine with config: &{Name:ha-472903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 23:56:30.475881  804231 start.go:125] createHost starting for "" (driver="docker")
	I0916 23:56:30.477680  804231 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0916 23:56:30.477953  804231 start.go:159] libmachine.API.Create for "ha-472903" (driver="docker")
	I0916 23:56:30.477986  804231 client.go:168] LocalClient.Create starting
	I0916 23:56:30.478060  804231 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem
	I0916 23:56:30.478097  804231 main.go:141] libmachine: Decoding PEM data...
	I0916 23:56:30.478118  804231 main.go:141] libmachine: Parsing certificate...
	I0916 23:56:30.478203  804231 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem
	I0916 23:56:30.478234  804231 main.go:141] libmachine: Decoding PEM data...
	I0916 23:56:30.478247  804231 main.go:141] libmachine: Parsing certificate...
	I0916 23:56:30.478706  804231 cli_runner.go:164] Run: docker network inspect ha-472903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0916 23:56:30.494743  804231 cli_runner.go:211] docker network inspect ha-472903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0916 23:56:30.494806  804231 network_create.go:284] running [docker network inspect ha-472903] to gather additional debugging logs...
	I0916 23:56:30.494829  804231 cli_runner.go:164] Run: docker network inspect ha-472903
	W0916 23:56:30.510851  804231 cli_runner.go:211] docker network inspect ha-472903 returned with exit code 1
	I0916 23:56:30.510886  804231 network_create.go:287] error running [docker network inspect ha-472903]: docker network inspect ha-472903: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-472903 not found
	I0916 23:56:30.510919  804231 network_create.go:289] output of [docker network inspect ha-472903]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-472903 not found
	
	** /stderr **
	I0916 23:56:30.511007  804231 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:56:30.527272  804231 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001b12870}
	I0916 23:56:30.527312  804231 network_create.go:124] attempt to create docker network ha-472903 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0916 23:56:30.527357  804231 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-472903 ha-472903
	I0916 23:56:30.581246  804231 network_create.go:108] docker network ha-472903 192.168.49.0/24 created
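	For reference, the network created here can be checked by hand with the same kind of Go-template query minikube runs above; the commands below are a sketch using names and addresses from this run, not part of the test itself:
	    docker network inspect ha-472903 --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'
	    # expected for this run: 192.168.49.0/24 192.168.49.1
	    docker network ls --filter label=name.minikube.sigs.k8s.io=ha-472903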
	I0916 23:56:30.581278  804231 kic.go:121] calculated static IP "192.168.49.2" for the "ha-472903" container
	I0916 23:56:30.581331  804231 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 23:56:30.597113  804231 cli_runner.go:164] Run: docker volume create ha-472903 --label name.minikube.sigs.k8s.io=ha-472903 --label created_by.minikube.sigs.k8s.io=true
	I0916 23:56:30.614615  804231 oci.go:103] Successfully created a docker volume ha-472903
	I0916 23:56:30.614694  804231 cli_runner.go:164] Run: docker run --rm --name ha-472903-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-472903 --entrypoint /usr/bin/test -v ha-472903:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0916 23:56:30.983301  804231 oci.go:107] Successfully prepared a docker volume ha-472903
	I0916 23:56:30.983346  804231 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0916 23:56:30.983369  804231 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 23:56:30.983457  804231 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-472903:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 23:56:35.109877  804231 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-472903:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.126378793s)
	I0916 23:56:35.109930  804231 kic.go:203] duration metric: took 4.126557088s to extract preloaded images to volume ...
	W0916 23:56:35.110010  804231 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0916 23:56:35.110041  804231 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0916 23:56:35.110081  804231 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 23:56:35.162423  804231 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-472903 --name ha-472903 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-472903 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-472903 --network ha-472903 --ip 192.168.49.2 --volume ha-472903:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0916 23:56:35.411448  804231 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Running}}
	I0916 23:56:35.428877  804231 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0916 23:56:35.447492  804231 cli_runner.go:164] Run: docker exec ha-472903 stat /var/lib/dpkg/alternatives/iptables
	I0916 23:56:35.490145  804231 oci.go:144] the created container "ha-472903" has a running status.
	I0916 23:56:35.490177  804231 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa...
	I0916 23:56:35.748917  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0916 23:56:35.748974  804231 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 23:56:35.776040  804231 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0916 23:56:35.795374  804231 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 23:56:35.795403  804231 kic_runner.go:114] Args: [docker exec --privileged ha-472903 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 23:56:35.841194  804231 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0916 23:56:35.859165  804231 machine.go:93] provisionDockerMachine start ...
	I0916 23:56:35.859278  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:56:35.877348  804231 main.go:141] libmachine: Using SSH client type: native
	I0916 23:56:35.877637  804231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33544 <nil> <nil>}
	I0916 23:56:35.877654  804231 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 23:56:36.014327  804231 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-472903
	
	I0916 23:56:36.014356  804231 ubuntu.go:182] provisioning hostname "ha-472903"
	I0916 23:56:36.014430  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:56:36.033295  804231 main.go:141] libmachine: Using SSH client type: native
	I0916 23:56:36.033543  804231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33544 <nil> <nil>}
	I0916 23:56:36.033558  804231 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-472903 && echo "ha-472903" | sudo tee /etc/hostname
	I0916 23:56:36.178557  804231 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-472903
	
	I0916 23:56:36.178627  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:56:36.196584  804231 main.go:141] libmachine: Using SSH client type: native
	I0916 23:56:36.196791  804231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33544 <nil> <nil>}
	I0916 23:56:36.196814  804231 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-472903' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-472903/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-472903' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 23:56:36.331895  804231 main.go:141] libmachine: SSH cmd err, output: <nil>: 
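	The node is now reachable over the published SSH port (33544 in this run, per the port inspection above). A minimal manual login, assuming the key path and docker user shown elsewhere in this log:
	    ssh -o StrictHostKeyChecking=no \
	        -i /home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa \
	        -p 33544 docker@127.0.0.1 hostname
	    # expected: ha-472903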
	I0916 23:56:36.331954  804231 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-749120/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-749120/.minikube}
	I0916 23:56:36.331987  804231 ubuntu.go:190] setting up certificates
	I0916 23:56:36.332000  804231 provision.go:84] configureAuth start
	I0916 23:56:36.332062  804231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903
	I0916 23:56:36.350923  804231 provision.go:143] copyHostCerts
	I0916 23:56:36.350968  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem
	I0916 23:56:36.351011  804231 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem, removing ...
	I0916 23:56:36.351021  804231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem
	I0916 23:56:36.351100  804231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem (1078 bytes)
	I0916 23:56:36.351216  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem
	I0916 23:56:36.351254  804231 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem, removing ...
	I0916 23:56:36.351265  804231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem
	I0916 23:56:36.351307  804231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem (1123 bytes)
	I0916 23:56:36.351374  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem
	I0916 23:56:36.351400  804231 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem, removing ...
	I0916 23:56:36.351409  804231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem
	I0916 23:56:36.351461  804231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem (1675 bytes)
	I0916 23:56:36.351538  804231 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem org=jenkins.ha-472903 san=[127.0.0.1 192.168.49.2 ha-472903 localhost minikube]
	I0916 23:56:36.406870  804231 provision.go:177] copyRemoteCerts
	I0916 23:56:36.406927  804231 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 23:56:36.406977  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:56:36.424064  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0916 23:56:36.520663  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 23:56:36.520737  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 23:56:36.546100  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 23:56:36.546162  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0916 23:56:36.569886  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 23:56:36.569946  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 23:56:36.593694  804231 provision.go:87] duration metric: took 261.676108ms to configureAuth
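	The certificates copied by copyRemoteCerts above sit next to the CA inside the machine; a quick confirmation using the remote paths from this log (a sketch, not part of the test run):
	    docker exec ha-472903 ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem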
	I0916 23:56:36.593725  804231 ubuntu.go:206] setting minikube options for container-runtime
	I0916 23:56:36.593891  804231 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0916 23:56:36.593903  804231 machine.go:96] duration metric: took 734.71199ms to provisionDockerMachine
	I0916 23:56:36.593911  804231 client.go:171] duration metric: took 6.115914604s to LocalClient.Create
	I0916 23:56:36.593933  804231 start.go:167] duration metric: took 6.115991162s to libmachine.API.Create "ha-472903"
	I0916 23:56:36.593942  804231 start.go:293] postStartSetup for "ha-472903" (driver="docker")
	I0916 23:56:36.593950  804231 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 23:56:36.593994  804231 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 23:56:36.594038  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:56:36.611127  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0916 23:56:36.708294  804231 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 23:56:36.711629  804231 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 23:56:36.711662  804231 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 23:56:36.711669  804231 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 23:56:36.711677  804231 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0916 23:56:36.711690  804231 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-749120/.minikube/addons for local assets ...
	I0916 23:56:36.711734  804231 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-749120/.minikube/files for local assets ...
	I0916 23:56:36.711817  804231 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> 7527072.pem in /etc/ssl/certs
	I0916 23:56:36.711829  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> /etc/ssl/certs/7527072.pem
	I0916 23:56:36.711917  804231 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 23:56:36.720521  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem --> /etc/ssl/certs/7527072.pem (1708 bytes)
	I0916 23:56:36.746614  804231 start.go:296] duration metric: took 152.657806ms for postStartSetup
	I0916 23:56:36.746970  804231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903
	I0916 23:56:36.763912  804231 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0916 23:56:36.764159  804231 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 23:56:36.764204  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:56:36.781099  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0916 23:56:36.872372  804231 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 23:56:36.876670  804231 start.go:128] duration metric: took 6.400768235s to createHost
	I0916 23:56:36.876701  804231 start.go:83] releasing machines lock for "ha-472903", held for 6.400928988s
	I0916 23:56:36.876787  804231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903
	I0916 23:56:36.894080  804231 ssh_runner.go:195] Run: cat /version.json
	I0916 23:56:36.894094  804231 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 23:56:36.894141  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:56:36.894182  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:56:36.912628  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0916 23:56:36.913001  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0916 23:56:37.079386  804231 ssh_runner.go:195] Run: systemctl --version
	I0916 23:56:37.084104  804231 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 23:56:37.088563  804231 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 23:56:37.116786  804231 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 23:56:37.116846  804231 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 23:56:37.142716  804231 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
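	Only the patched loopback config should remain active after the rename above; listing the directory from the host shows the disabled files (filenames taken from the line above, the rest of the listing is an assumption):
	    docker exec ha-472903 ls /etc/cni/net.d/
	    # 87-podman-bridge.conflist.mk_disabled and 100-crio-bridge.conf.mk_disabled carry the .mk_disabled suffix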
	I0916 23:56:37.142738  804231 start.go:495] detecting cgroup driver to use...
	I0916 23:56:37.142772  804231 detect.go:190] detected "systemd" cgroup driver on host os
	I0916 23:56:37.142832  804231 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 23:56:37.154693  804231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 23:56:37.165920  804231 docker.go:218] disabling cri-docker service (if available) ...
	I0916 23:56:37.165978  804231 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 23:56:37.179227  804231 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 23:56:37.192751  804231 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 23:56:37.255915  804231 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 23:56:37.324761  804231 docker.go:234] disabling docker service ...
	I0916 23:56:37.324836  804231 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 23:56:37.342233  804231 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 23:56:37.353324  804231 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 23:56:37.420555  804231 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 23:56:37.486396  804231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 23:56:37.497453  804231 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 23:56:37.513435  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0916 23:56:37.524399  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 23:56:37.534072  804231 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0916 23:56:37.534132  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0916 23:56:37.543872  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 23:56:37.553478  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 23:56:37.562918  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 23:56:37.572431  804231 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 23:56:37.581176  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 23:56:37.590540  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 23:56:37.599825  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 23:56:37.609340  804231 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 23:56:37.617500  804231 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 23:56:37.625771  804231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:56:37.685687  804231 ssh_runner.go:195] Run: sudo systemctl restart containerd
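	The sed edits above all land in /etc/containerd/config.toml; once the daemon is back they can be spot-checked from the host (a sketch using values from this log, not something the test runs):
	    docker exec ha-472903 grep -nE 'SystemdCgroup|sandbox_image|restrict_oom_score_adj|conf_dir' /etc/containerd/config.toml
	    # expected: SystemdCgroup = true, sandbox_image = "registry.k8s.io/pause:3.10.1",
	    #           restrict_oom_score_adj = false, conf_dir = "/etc/cni/net.d"
	    docker exec ha-472903 crictl --runtime-endpoint unix:///run/containerd/containerd.sock version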
	I0916 23:56:37.787201  804231 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0916 23:56:37.787275  804231 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0916 23:56:37.791126  804231 start.go:563] Will wait 60s for crictl version
	I0916 23:56:37.791200  804231 ssh_runner.go:195] Run: which crictl
	I0916 23:56:37.794684  804231 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 23:56:37.828753  804231 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0916 23:56:37.828806  804231 ssh_runner.go:195] Run: containerd --version
	I0916 23:56:37.851610  804231 ssh_runner.go:195] Run: containerd --version
	I0916 23:56:37.876577  804231 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0916 23:56:37.877711  804231 cli_runner.go:164] Run: docker network inspect ha-472903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:56:37.894044  804231 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 23:56:37.897995  804231 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 23:56:37.909702  804231 kubeadm.go:875] updating cluster {Name:ha-472903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 23:56:37.909830  804231 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0916 23:56:37.909936  804231 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 23:56:37.943964  804231 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 23:56:37.943985  804231 containerd.go:534] Images already preloaded, skipping extraction
	I0916 23:56:37.944040  804231 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 23:56:37.976374  804231 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 23:56:37.976397  804231 cache_images.go:85] Images are preloaded, skipping loading
	I0916 23:56:37.976405  804231 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 containerd true true} ...
	I0916 23:56:37.976525  804231 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-472903 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 23:56:37.976590  804231 ssh_runner.go:195] Run: sudo crictl info
	I0916 23:56:38.009585  804231 cni.go:84] Creating CNI manager for ""
	I0916 23:56:38.009608  804231 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0916 23:56:38.009620  804231 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 23:56:38.009642  804231 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-472903 NodeName:ha-472903 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 23:56:38.009740  804231 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "ha-472903"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 23:56:38.009763  804231 kube-vip.go:115] generating kube-vip config ...
	I0916 23:56:38.009799  804231 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0916 23:56:38.022796  804231 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0916 23:56:38.022978  804231 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
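	This manifest is copied to /etc/kubernetes/manifests/kube-vip.yaml a few lines below, so kubelet runs kube-vip as a static pod and the API server becomes reachable on the VIP. A hedged smoke test once the control plane is up (the mirror-pod name follows the usual <pod>-<nodeName> static-pod convention and is an assumption here):
	    curl -k https://192.168.49.254:8443/version
	    kubectl -n kube-system get pod kube-vip-ha-472903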
	I0916 23:56:38.023041  804231 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0916 23:56:38.032162  804231 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 23:56:38.032241  804231 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0916 23:56:38.040936  804231 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0916 23:56:38.058672  804231 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 23:56:38.079097  804231 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
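	The kubeadm config rendered earlier is what just landed in /var/tmp/minikube/kubeadm.yaml.new (it is renamed to kubeadm.yaml right before init). It can be exercised without changing node state via kubeadm's dry-run mode, a sketch using the binary path from this log:
	    sudo /var/lib/minikube/binaries/v1.34.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run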
	I0916 23:56:38.097183  804231 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I0916 23:56:38.116629  804231 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0916 23:56:38.120221  804231 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
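	Both cluster-internal hostnames are now pinned in the node's /etc/hosts; a quick check with the values from this run:
	    docker exec ha-472903 grep -E 'host.minikube.internal|control-plane.minikube.internal' /etc/hosts
	    # 192.168.49.1    host.minikube.internal          (docker network gateway)
	    # 192.168.49.254  control-plane.minikube.internal (kube-vip VIP)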
	I0916 23:56:38.131205  804231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:56:38.195735  804231 ssh_runner.go:195] Run: sudo systemctl start kubelet
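	kubelet now takes its flags from the 10-kubeadm.conf drop-in written above; the effective unit can be confirmed with systemd itself (a sketch; the expected ExecStart is the kubelet line rendered earlier in this log):
	    docker exec ha-472903 systemctl cat kubelet
	    # [Service] ExecStart should show /var/lib/minikube/binaries/v1.34.0/kubelet --hostname-override=ha-472903 --node-ip=192.168.49.2 ...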
	I0916 23:56:38.216649  804231 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903 for IP: 192.168.49.2
	I0916 23:56:38.216671  804231 certs.go:194] generating shared ca certs ...
	I0916 23:56:38.216692  804231 certs.go:226] acquiring lock for ca certs: {Name:mk87d179b4a631193bd9c86db8034ccf19400cde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:56:38.216854  804231 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key
	I0916 23:56:38.216907  804231 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key
	I0916 23:56:38.216920  804231 certs.go:256] generating profile certs ...
	I0916 23:56:38.216989  804231 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key
	I0916 23:56:38.217007  804231 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.crt with IP's: []
	I0916 23:56:38.286683  804231 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.crt ...
	I0916 23:56:38.286713  804231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.crt: {Name:mk764ef4ac73429cea14d799835f3822d8afb254 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:56:38.286876  804231 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key ...
	I0916 23:56:38.286887  804231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key: {Name:mk988f40b7ad20c61b4ffc19afd15eea50787a6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:56:38.286965  804231 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.ef70afe8
	I0916 23:56:38.286981  804231 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.ef70afe8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I0916 23:56:38.411782  804231 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.ef70afe8 ...
	I0916 23:56:38.411812  804231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.ef70afe8: {Name:mkbca9fcc4cd73eb913b43ef67240975ba048601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:56:38.411977  804231 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.ef70afe8 ...
	I0916 23:56:38.411990  804231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.ef70afe8: {Name:mk56f7fb29011c6372caaf96dfdbcab1b202e8b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:56:38.412061  804231 certs.go:381] copying /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.ef70afe8 -> /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt
	I0916 23:56:38.412138  804231 certs.go:385] copying /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.ef70afe8 -> /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key
	I0916 23:56:38.412190  804231 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key
	I0916 23:56:38.412204  804231 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt with IP's: []
	I0916 23:56:38.735728  804231 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt ...
	I0916 23:56:38.735759  804231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt: {Name:mke25602938652bbe51197bb8e5738dfc5dca50b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:56:38.735935  804231 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key ...
	I0916 23:56:38.735947  804231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key: {Name:mkc7d616357a8be8181d43ca8cb33ab512ce94dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:56:38.736027  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 23:56:38.736044  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 23:56:38.736055  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 23:56:38.736068  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 23:56:38.736078  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 23:56:38.736090  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 23:56:38.736105  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 23:56:38.736115  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 23:56:38.736175  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem (1338 bytes)
	W0916 23:56:38.736210  804231 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707_empty.pem, impossibly tiny 0 bytes
	I0916 23:56:38.736218  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem (1675 bytes)
	I0916 23:56:38.736242  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem (1078 bytes)
	I0916 23:56:38.736266  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem (1123 bytes)
	I0916 23:56:38.736284  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem (1675 bytes)
	I0916 23:56:38.736322  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem (1708 bytes)
	I0916 23:56:38.736347  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> /usr/share/ca-certificates/7527072.pem
	I0916 23:56:38.736360  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:56:38.736372  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem -> /usr/share/ca-certificates/752707.pem
	I0916 23:56:38.736905  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 23:56:38.762142  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 23:56:38.786590  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 23:56:38.810694  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0916 23:56:38.834521  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0916 23:56:38.858677  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 23:56:38.881975  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 23:56:38.906146  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 23:56:38.929698  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem --> /usr/share/ca-certificates/7527072.pem (1708 bytes)
	I0916 23:56:38.955154  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 23:56:38.978551  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem --> /usr/share/ca-certificates/752707.pem (1338 bytes)
	I0916 23:56:39.001782  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 23:56:39.019405  804231 ssh_runner.go:195] Run: openssl version
	I0916 23:56:39.024868  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/752707.pem && ln -fs /usr/share/ca-certificates/752707.pem /etc/ssl/certs/752707.pem"
	I0916 23:56:39.034165  804231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/752707.pem
	I0916 23:56:39.038348  804231 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 23:54 /usr/share/ca-certificates/752707.pem
	I0916 23:56:39.038407  804231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/752707.pem
	I0916 23:56:39.045172  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/752707.pem /etc/ssl/certs/51391683.0"
	I0916 23:56:39.054735  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7527072.pem && ln -fs /usr/share/ca-certificates/7527072.pem /etc/ssl/certs/7527072.pem"
	I0916 23:56:39.065180  804231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7527072.pem
	I0916 23:56:39.068976  804231 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 23:54 /usr/share/ca-certificates/7527072.pem
	I0916 23:56:39.069038  804231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7527072.pem
	I0916 23:56:39.075920  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7527072.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 23:56:39.085838  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 23:56:39.095394  804231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:56:39.098966  804231 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:56:39.099019  804231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:56:39.105643  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
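	The symlink names above are OpenSSL subject-hash names, which is what the system trust store looks up; the hash can be reproduced for any of the copied certs (example uses the minikubeCA file and the b5213941 hash from this run):
	    docker exec ha-472903 openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	    # b5213941  -> matches the /etc/ssl/certs/b5213941.0 symlink created above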
	I0916 23:56:39.114800  804231 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 23:56:39.117988  804231 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 23:56:39.118033  804231 kubeadm.go:392] StartCluster: {Name:ha-472903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 23:56:39.118097  804231 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0916 23:56:39.118132  804231 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 23:56:39.154291  804231 cri.go:89] found id: ""
	I0916 23:56:39.154361  804231 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 23:56:39.163485  804231 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 23:56:39.172454  804231 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0916 23:56:39.172499  804231 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 23:56:39.181066  804231 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 23:56:39.181098  804231 kubeadm.go:157] found existing configuration files:
	
	I0916 23:56:39.181131  804231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 23:56:39.189824  804231 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 23:56:39.189873  804231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 23:56:39.198165  804231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 23:56:39.206772  804231 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 23:56:39.206819  804231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 23:56:39.215119  804231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 23:56:39.223660  804231 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 23:56:39.223717  804231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 23:56:39.232099  804231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 23:56:39.240514  804231 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 23:56:39.240559  804231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 23:56:39.248850  804231 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0916 23:56:39.285897  804231 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0916 23:56:39.285950  804231 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 23:56:39.300660  804231 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0916 23:56:39.300727  804231 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1037-gcp
	I0916 23:56:39.300801  804231 kubeadm.go:310] OS: Linux
	I0916 23:56:39.300901  804231 kubeadm.go:310] CGROUPS_CPU: enabled
	I0916 23:56:39.300975  804231 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0916 23:56:39.301037  804231 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0916 23:56:39.301080  804231 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0916 23:56:39.301127  804231 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0916 23:56:39.301169  804231 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0916 23:56:39.301211  804231 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0916 23:56:39.301268  804231 kubeadm.go:310] CGROUPS_IO: enabled
	I0916 23:56:39.351787  804231 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 23:56:39.351909  804231 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 23:56:39.351995  804231 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 23:56:39.358062  804231 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 23:56:39.360794  804231 out.go:252]   - Generating certificates and keys ...
	I0916 23:56:39.360906  804231 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 23:56:39.360984  804231 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 23:56:39.805287  804231 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 23:56:40.002708  804231 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 23:56:40.279763  804231 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 23:56:40.813028  804231 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 23:56:41.074848  804231 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 23:56:41.075343  804231 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-472903 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0916 23:56:41.124880  804231 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 23:56:41.125041  804231 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-472903 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0916 23:56:41.707716  804231 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 23:56:42.089212  804231 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 23:56:42.627038  804231 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 23:56:42.627119  804231 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 23:56:42.823901  804231 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 23:56:43.022989  804231 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 23:56:43.163778  804231 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 23:56:43.708743  804231 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 23:56:44.024642  804231 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 23:56:44.025130  804231 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 23:56:44.027319  804231 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 23:56:44.029599  804231 out.go:252]   - Booting up control plane ...
	I0916 23:56:44.029737  804231 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 23:56:44.029842  804231 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 23:56:44.030181  804231 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 23:56:44.039957  804231 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 23:56:44.040118  804231 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0916 23:56:44.047794  804231 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0916 23:56:44.048177  804231 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 23:56:44.048269  804231 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 23:56:44.122629  804231 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 23:56:44.122739  804231 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 23:56:45.124352  804231 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001822735s
	I0916 23:56:45.127338  804231 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0916 23:56:45.127477  804231 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0916 23:56:45.127582  804231 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0916 23:56:45.127694  804231 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0916 23:56:47.478256  804231 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.350892202s
	I0916 23:56:47.717698  804231 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.590223043s
	I0916 23:56:49.129161  804231 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 4.001748341s
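For reference, the three health endpoints kubeadm polls above can be checked by hand from the node; the URLs are taken verbatim from the log, and anonymous access to /healthz, /livez and /readyz is the Kubernetes default, so plain curl -k is normally sufficient (a sketch, not part of the captured run):

    curl -k https://192.168.49.2:8443/livez      # kube-apiserver
    curl -k https://127.0.0.1:10257/healthz      # kube-controller-manager
    curl -k https://127.0.0.1:10259/livez        # kube-scheduler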
	I0916 23:56:49.140036  804231 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 23:56:49.148779  804231 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 23:56:49.158010  804231 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 23:56:49.158279  804231 kubeadm.go:310] [mark-control-plane] Marking the node ha-472903 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 23:56:49.165085  804231 kubeadm.go:310] [bootstrap-token] Using token: 4apri1.yqe8ok7wc4ltba21
	I0916 23:56:49.166180  804231 out.go:252]   - Configuring RBAC rules ...
	I0916 23:56:49.166328  804231 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 23:56:49.169225  804231 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 23:56:49.174527  804231 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 23:56:49.176741  804231 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 23:56:49.178892  804231 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 23:56:49.181107  804231 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 23:56:49.534440  804231 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 23:56:49.948567  804231 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 23:56:50.534581  804231 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 23:56:50.535429  804231 kubeadm.go:310] 
	I0916 23:56:50.535529  804231 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 23:56:50.535542  804231 kubeadm.go:310] 
	I0916 23:56:50.535650  804231 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 23:56:50.535660  804231 kubeadm.go:310] 
	I0916 23:56:50.535696  804231 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 23:56:50.535801  804231 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 23:56:50.535858  804231 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 23:56:50.535872  804231 kubeadm.go:310] 
	I0916 23:56:50.535940  804231 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 23:56:50.535949  804231 kubeadm.go:310] 
	I0916 23:56:50.536027  804231 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 23:56:50.536037  804231 kubeadm.go:310] 
	I0916 23:56:50.536125  804231 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 23:56:50.536212  804231 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 23:56:50.536280  804231 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 23:56:50.536286  804231 kubeadm.go:310] 
	I0916 23:56:50.536356  804231 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 23:56:50.536441  804231 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 23:56:50.536448  804231 kubeadm.go:310] 
	I0916 23:56:50.536543  804231 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 4apri1.yqe8ok7wc4ltba21 \
	I0916 23:56:50.536688  804231 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:52c78ec9ad9a2dc0941e43ce337b864c76ea573e452bc75ed737e69ad76deac1 \
	I0916 23:56:50.536722  804231 kubeadm.go:310] 	--control-plane 
	I0916 23:56:50.536731  804231 kubeadm.go:310] 
	I0916 23:56:50.536842  804231 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 23:56:50.536857  804231 kubeadm.go:310] 
	I0916 23:56:50.536947  804231 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 4apri1.yqe8ok7wc4ltba21 \
	I0916 23:56:50.537084  804231 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:52c78ec9ad9a2dc0941e43ce337b864c76ea573e452bc75ed737e69ad76deac1 
	I0916 23:56:50.539097  804231 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1037-gcp\n", err: exit status 1
	I0916 23:56:50.539238  804231 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
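The --discovery-token-ca-cert-hash value in the join commands above is the SHA-256 of the cluster CA public key. If it ever needs to be recomputed, the standard kubeadm recipe works against the certificateDir this run uses (/var/lib/minikube/certs, per the [certs] line earlier):

    # prints the hex digest to compare against the value after "sha256:" above
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'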
	I0916 23:56:50.539264  804231 cni.go:84] Creating CNI manager for ""
	I0916 23:56:50.539274  804231 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0916 23:56:50.540523  804231 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0916 23:56:50.541480  804231 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 23:56:50.545518  804231 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0916 23:56:50.545534  804231 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 23:56:50.563251  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0916 23:56:50.762002  804231 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 23:56:50.762092  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:56:50.762127  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-472903 minikube.k8s.io/updated_at=2025_09_16T23_56_50_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a minikube.k8s.io/name=ha-472903 minikube.k8s.io/primary=true
	I0916 23:56:50.771679  804231 ops.go:34] apiserver oom_adj: -16
	I0916 23:56:50.843646  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:56:51.344428  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:56:51.844440  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:56:52.344316  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:56:52.844594  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:56:53.343854  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:56:53.844615  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:56:54.344057  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:56:54.844066  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:56:55.344374  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:56:55.844478  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:56:55.927027  804231 kubeadm.go:1105] duration metric: took 5.165002596s to wait for elevateKubeSystemPrivileges
	I0916 23:56:55.927062  804231 kubeadm.go:394] duration metric: took 16.809033965s to StartCluster
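The repeated `kubectl get sa default` runs above are minikube polling until the default ServiceAccount exists, which is the point at which workloads can be admitted. A minimal stand-alone equivalent, assuming kubectl and a KUBECONFIG for the new cluster, would be:

    # poll every 0.5s until the default ServiceAccount appears, as the loop above does
    until kubectl get serviceaccount default >/dev/null 2>&1; do
      sleep 0.5
    done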
	I0916 23:56:55.927081  804231 settings.go:142] acquiring lock: {Name:mk6c1a5bee23e141aad5180323c16c47ed580ab8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:56:55.927146  804231 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21550-749120/kubeconfig
	I0916 23:56:55.927785  804231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/kubeconfig: {Name:mk937123a8fee18625833b0bd778c4556f6787be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:56:55.928026  804231 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 23:56:55.928018  804231 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 23:56:55.928038  804231 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 23:56:55.928103  804231 start.go:241] waiting for startup goroutines ...
	I0916 23:56:55.928121  804231 addons.go:69] Setting default-storageclass=true in profile "ha-472903"
	I0916 23:56:55.928148  804231 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-472903"
	I0916 23:56:55.928126  804231 addons.go:69] Setting storage-provisioner=true in profile "ha-472903"
	I0916 23:56:55.928222  804231 addons.go:238] Setting addon storage-provisioner=true in "ha-472903"
	I0916 23:56:55.928269  804231 host.go:66] Checking if "ha-472903" exists ...
	I0916 23:56:55.928296  804231 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0916 23:56:55.928610  804231 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0916 23:56:55.928740  804231 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0916 23:56:55.954806  804231 kapi.go:59] client config for ha-472903: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key", CAFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 23:56:55.955519  804231 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0916 23:56:55.955545  804231 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0916 23:56:55.955543  804231 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0916 23:56:55.955553  804231 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0916 23:56:55.955611  804231 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0916 23:56:55.955620  804231 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0916 23:56:55.956096  804231 addons.go:238] Setting addon default-storageclass=true in "ha-472903"
	I0916 23:56:55.956145  804231 host.go:66] Checking if "ha-472903" exists ...
	I0916 23:56:55.956685  804231 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0916 23:56:55.957279  804231 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 23:56:55.961536  804231 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 23:56:55.961557  804231 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 23:56:55.961614  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:56:55.979896  804231 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 23:56:55.979925  804231 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 23:56:55.979985  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:56:55.982838  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0916 23:56:55.999402  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0916 23:56:56.011618  804231 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 23:56:56.095355  804231 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 23:56:56.110814  804231 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 23:56:56.153646  804231 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0916 23:56:56.360175  804231 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0916 23:56:56.361116  804231 addons.go:514] duration metric: took 433.076562ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0916 23:56:56.361149  804231 start.go:246] waiting for cluster config update ...
	I0916 23:56:56.361163  804231 start.go:255] writing updated cluster config ...
	I0916 23:56:56.362407  804231 out.go:203] 
	I0916 23:56:56.363527  804231 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0916 23:56:56.363621  804231 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0916 23:56:56.364993  804231 out.go:179] * Starting "ha-472903-m02" control-plane node in "ha-472903" cluster
	I0916 23:56:56.365873  804231 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0916 23:56:56.366751  804231 out.go:179] * Pulling base image v0.0.48 ...
	I0916 23:56:56.367539  804231 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0916 23:56:56.367556  804231 cache.go:58] Caching tarball of preloaded images
	I0916 23:56:56.367630  804231 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0916 23:56:56.367646  804231 preload.go:172] Found /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 23:56:56.367654  804231 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0916 23:56:56.367711  804231 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0916 23:56:56.386547  804231 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0916 23:56:56.386565  804231 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0916 23:56:56.386580  804231 cache.go:232] Successfully downloaded all kic artifacts
	I0916 23:56:56.386607  804231 start.go:360] acquireMachinesLock for ha-472903-m02: {Name:mk81d8c73856cf84ceff1767a1681f3f3cdab773 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 23:56:56.386700  804231 start.go:364] duration metric: took 70.184µs to acquireMachinesLock for "ha-472903-m02"
	I0916 23:56:56.386738  804231 start.go:93] Provisioning new machine with config: &{Name:ha-472903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 23:56:56.386824  804231 start.go:125] createHost starting for "m02" (driver="docker")
	I0916 23:56:56.388402  804231 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0916 23:56:56.388536  804231 start.go:159] libmachine.API.Create for "ha-472903" (driver="docker")
	I0916 23:56:56.388563  804231 client.go:168] LocalClient.Create starting
	I0916 23:56:56.388626  804231 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem
	I0916 23:56:56.388664  804231 main.go:141] libmachine: Decoding PEM data...
	I0916 23:56:56.388687  804231 main.go:141] libmachine: Parsing certificate...
	I0916 23:56:56.388757  804231 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem
	I0916 23:56:56.388789  804231 main.go:141] libmachine: Decoding PEM data...
	I0916 23:56:56.388804  804231 main.go:141] libmachine: Parsing certificate...
	I0916 23:56:56.389042  804231 cli_runner.go:164] Run: docker network inspect ha-472903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:56:56.404624  804231 network_create.go:77] Found existing network {name:ha-472903 subnet:0xc001d2d140 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0916 23:56:56.404653  804231 kic.go:121] calculated static IP "192.168.49.3" for the "ha-472903-m02" container
	I0916 23:56:56.404719  804231 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 23:56:56.420231  804231 cli_runner.go:164] Run: docker volume create ha-472903-m02 --label name.minikube.sigs.k8s.io=ha-472903-m02 --label created_by.minikube.sigs.k8s.io=true
	I0916 23:56:56.436361  804231 oci.go:103] Successfully created a docker volume ha-472903-m02
	I0916 23:56:56.436430  804231 cli_runner.go:164] Run: docker run --rm --name ha-472903-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-472903-m02 --entrypoint /usr/bin/test -v ha-472903-m02:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0916 23:56:56.943375  804231 oci.go:107] Successfully prepared a docker volume ha-472903-m02
	I0916 23:56:56.943427  804231 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0916 23:56:56.943455  804231 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 23:56:56.943528  804231 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-472903-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 23:57:01.091161  804231 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-472903-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.147592491s)
	I0916 23:57:01.091197  804231 kic.go:203] duration metric: took 4.147738136s to extract preloaded images to volume ...
	W0916 23:57:01.091312  804231 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0916 23:57:01.091355  804231 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0916 23:57:01.091403  804231 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 23:57:01.142900  804231 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-472903-m02 --name ha-472903-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-472903-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-472903-m02 --network ha-472903 --ip 192.168.49.3 --volume ha-472903-m02:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0916 23:57:01.378924  804231 cli_runner.go:164] Run: docker container inspect ha-472903-m02 --format={{.State.Running}}
	I0916 23:57:01.396232  804231 cli_runner.go:164] Run: docker container inspect ha-472903-m02 --format={{.State.Status}}
	I0916 23:57:01.412927  804231 cli_runner.go:164] Run: docker exec ha-472903-m02 stat /var/lib/dpkg/alternatives/iptables
	I0916 23:57:01.469205  804231 oci.go:144] the created container "ha-472903-m02" has a running status.
	I0916 23:57:01.469235  804231 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa...
	I0916 23:57:01.517570  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0916 23:57:01.517621  804231 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 23:57:01.540818  804231 cli_runner.go:164] Run: docker container inspect ha-472903-m02 --format={{.State.Status}}
	I0916 23:57:01.560831  804231 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 23:57:01.560858  804231 kic_runner.go:114] Args: [docker exec --privileged ha-472903-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 23:57:01.615037  804231 cli_runner.go:164] Run: docker container inspect ha-472903-m02 --format={{.State.Status}}
	I0916 23:57:01.637921  804231 machine.go:93] provisionDockerMachine start ...
	I0916 23:57:01.638030  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0916 23:57:01.659741  804231 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:01.660056  804231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33549 <nil> <nil>}
	I0916 23:57:01.660078  804231 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 23:57:01.800716  804231 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-472903-m02
	
	I0916 23:57:01.800749  804231 ubuntu.go:182] provisioning hostname "ha-472903-m02"
	I0916 23:57:01.800817  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0916 23:57:01.819791  804231 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:01.820013  804231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33549 <nil> <nil>}
	I0916 23:57:01.820030  804231 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-472903-m02 && echo "ha-472903-m02" | sudo tee /etc/hostname
	I0916 23:57:01.967539  804231 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-472903-m02
	
	I0916 23:57:01.967631  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0916 23:57:01.987814  804231 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:01.988031  804231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33549 <nil> <nil>}
	I0916 23:57:01.988047  804231 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-472903-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-472903-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-472903-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 23:57:02.121536  804231 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 23:57:02.121571  804231 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-749120/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-749120/.minikube}
	I0916 23:57:02.121588  804231 ubuntu.go:190] setting up certificates
	I0916 23:57:02.121602  804231 provision.go:84] configureAuth start
	I0916 23:57:02.121663  804231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m02
	I0916 23:57:02.139056  804231 provision.go:143] copyHostCerts
	I0916 23:57:02.139098  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem
	I0916 23:57:02.139135  804231 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem, removing ...
	I0916 23:57:02.139147  804231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem
	I0916 23:57:02.139221  804231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem (1078 bytes)
	I0916 23:57:02.139329  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem
	I0916 23:57:02.139362  804231 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem, removing ...
	I0916 23:57:02.139372  804231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem
	I0916 23:57:02.139430  804231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem (1123 bytes)
	I0916 23:57:02.139521  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem
	I0916 23:57:02.139549  804231 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem, removing ...
	I0916 23:57:02.139559  804231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem
	I0916 23:57:02.139599  804231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem (1675 bytes)
	I0916 23:57:02.139690  804231 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem org=jenkins.ha-472903-m02 san=[127.0.0.1 192.168.49.3 ha-472903-m02 localhost minikube]
	I0916 23:57:02.262354  804231 provision.go:177] copyRemoteCerts
	I0916 23:57:02.262428  804231 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 23:57:02.262491  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0916 23:57:02.279792  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33549 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa Username:docker}
	I0916 23:57:02.375833  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 23:57:02.375903  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 23:57:02.400316  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 23:57:02.400373  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 23:57:02.422506  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 23:57:02.422550  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 23:57:02.445091  804231 provision.go:87] duration metric: took 323.464176ms to configureAuth
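A quick way to spot-check the server certificate just copied to the machine is to print its subject and SANs; this assumes the /etc/docker/server.pem path used in the scp lines above and OpenSSL 1.1.1+ for the -ext flag:

    sudo openssl x509 -in /etc/docker/server.pem -noout -subject -ext subjectAltName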
	I0916 23:57:02.445121  804231 ubuntu.go:206] setting minikube options for container-runtime
	I0916 23:57:02.445295  804231 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0916 23:57:02.445313  804231 machine.go:96] duration metric: took 807.372883ms to provisionDockerMachine
	I0916 23:57:02.445320  804231 client.go:171] duration metric: took 6.056751196s to LocalClient.Create
	I0916 23:57:02.445337  804231 start.go:167] duration metric: took 6.056804276s to libmachine.API.Create "ha-472903"
	I0916 23:57:02.445346  804231 start.go:293] postStartSetup for "ha-472903-m02" (driver="docker")
	I0916 23:57:02.445354  804231 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 23:57:02.445402  804231 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 23:57:02.445461  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0916 23:57:02.463550  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33549 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa Username:docker}
	I0916 23:57:02.559528  804231 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 23:57:02.562755  804231 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 23:57:02.562780  804231 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 23:57:02.562787  804231 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 23:57:02.562793  804231 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0916 23:57:02.562803  804231 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-749120/.minikube/addons for local assets ...
	I0916 23:57:02.562847  804231 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-749120/.minikube/files for local assets ...
	I0916 23:57:02.562920  804231 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> 7527072.pem in /etc/ssl/certs
	I0916 23:57:02.562930  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> /etc/ssl/certs/7527072.pem
	I0916 23:57:02.563018  804231 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 23:57:02.571142  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem --> /etc/ssl/certs/7527072.pem (1708 bytes)
	I0916 23:57:02.596466  804231 start.go:296] duration metric: took 151.106324ms for postStartSetup
	I0916 23:57:02.596768  804231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m02
	I0916 23:57:02.613316  804231 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0916 23:57:02.613561  804231 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 23:57:02.613601  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0916 23:57:02.632056  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33549 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa Username:docker}
	I0916 23:57:02.723085  804231 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 23:57:02.727430  804231 start.go:128] duration metric: took 6.340577447s to createHost
	I0916 23:57:02.727453  804231 start.go:83] releasing machines lock for "ha-472903-m02", held for 6.34073897s
	I0916 23:57:02.727519  804231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m02
	I0916 23:57:02.746152  804231 out.go:179] * Found network options:
	I0916 23:57:02.747248  804231 out.go:179]   - NO_PROXY=192.168.49.2
	W0916 23:57:02.748187  804231 proxy.go:120] fail to check proxy env: Error ip not in block
	W0916 23:57:02.748240  804231 proxy.go:120] fail to check proxy env: Error ip not in block
	I0916 23:57:02.748311  804231 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 23:57:02.748360  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0916 23:57:02.748367  804231 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 23:57:02.748427  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0916 23:57:02.765286  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33549 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa Username:docker}
	I0916 23:57:02.766625  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33549 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa Username:docker}
	I0916 23:57:02.856922  804231 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 23:57:02.936692  804231 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 23:57:02.936761  804231 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 23:57:02.961822  804231 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0916 23:57:02.961845  804231 start.go:495] detecting cgroup driver to use...
	I0916 23:57:02.961878  804231 detect.go:190] detected "systemd" cgroup driver on host os
	I0916 23:57:02.961919  804231 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 23:57:02.973318  804231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 23:57:02.983927  804231 docker.go:218] disabling cri-docker service (if available) ...
	I0916 23:57:02.983969  804231 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 23:57:02.996091  804231 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 23:57:03.009314  804231 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 23:57:03.072565  804231 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 23:57:03.140469  804231 docker.go:234] disabling docker service ...
	I0916 23:57:03.140526  804231 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 23:57:03.157179  804231 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 23:57:03.167955  804231 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 23:57:03.233386  804231 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 23:57:03.296537  804231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 23:57:03.307574  804231 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 23:57:03.323754  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0916 23:57:03.334305  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 23:57:03.343767  804231 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0916 23:57:03.343826  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0916 23:57:03.353029  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 23:57:03.361991  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 23:57:03.371206  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 23:57:03.380598  804231 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 23:57:03.389216  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 23:57:03.398125  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 23:57:03.407145  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 23:57:03.416183  804231 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 23:57:03.424123  804231 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 23:57:03.432185  804231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:03.493561  804231 ssh_runner.go:195] Run: sudo systemctl restart containerd
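After the sed edits above and the containerd restart, the effective settings can be confirmed directly in the same /etc/containerd/config.toml the commands operate on:

    sudo grep -n 'SystemdCgroup' /etc/containerd/config.toml    # expect: SystemdCgroup = true
    sudo grep -n 'sandbox_image' /etc/containerd/config.toml    # expect: registry.k8s.io/pause:3.10.1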
	I0916 23:57:03.591942  804231 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0916 23:57:03.592010  804231 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0916 23:57:03.595710  804231 start.go:563] Will wait 60s for crictl version
	I0916 23:57:03.595768  804231 ssh_runner.go:195] Run: which crictl
	I0916 23:57:03.599108  804231 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 23:57:03.633181  804231 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0916 23:57:03.633231  804231 ssh_runner.go:195] Run: containerd --version
	I0916 23:57:03.656364  804231 ssh_runner.go:195] Run: containerd --version
	I0916 23:57:03.680150  804231 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0916 23:57:03.681177  804231 out.go:179]   - env NO_PROXY=192.168.49.2
	I0916 23:57:03.682053  804231 cli_runner.go:164] Run: docker network inspect ha-472903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:57:03.699720  804231 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 23:57:03.703306  804231 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 23:57:03.714275  804231 mustload.go:65] Loading cluster: ha-472903
	I0916 23:57:03.714452  804231 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0916 23:57:03.714650  804231 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0916 23:57:03.730631  804231 host.go:66] Checking if "ha-472903" exists ...
	I0916 23:57:03.730849  804231 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903 for IP: 192.168.49.3
	I0916 23:57:03.730859  804231 certs.go:194] generating shared ca certs ...
	I0916 23:57:03.730877  804231 certs.go:226] acquiring lock for ca certs: {Name:mk87d179b4a631193bd9c86db8034ccf19400cde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:03.730987  804231 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key
	I0916 23:57:03.731023  804231 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key
	I0916 23:57:03.731032  804231 certs.go:256] generating profile certs ...
	I0916 23:57:03.731092  804231 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key
	I0916 23:57:03.731114  804231 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.d67fba4a
	I0916 23:57:03.731125  804231 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.d67fba4a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I0916 23:57:03.830248  804231 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.d67fba4a ...
	I0916 23:57:03.830275  804231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.d67fba4a: {Name:mk3e97859392ca0d50685e4c31c19acd3c590753 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:03.830438  804231 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.d67fba4a ...
	I0916 23:57:03.830453  804231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.d67fba4a: {Name:mkd3ec6288ef831df369d4ec39839c410f5116ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:03.830530  804231 certs.go:381] copying /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.d67fba4a -> /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt
	I0916 23:57:03.830653  804231 certs.go:385] copying /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.d67fba4a -> /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key
	I0916 23:57:03.830779  804231 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key
	I0916 23:57:03.830794  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 23:57:03.830809  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 23:57:03.830823  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 23:57:03.830836  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 23:57:03.830846  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 23:57:03.830855  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 23:57:03.830864  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 23:57:03.830873  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 23:57:03.830920  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem (1338 bytes)
	W0916 23:57:03.830952  804231 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707_empty.pem, impossibly tiny 0 bytes
	I0916 23:57:03.830962  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem (1675 bytes)
	I0916 23:57:03.830981  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem (1078 bytes)
	I0916 23:57:03.831001  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem (1123 bytes)
	I0916 23:57:03.831021  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem (1675 bytes)
	I0916 23:57:03.831058  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem (1708 bytes)
	I0916 23:57:03.831081  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem -> /usr/share/ca-certificates/752707.pem
	I0916 23:57:03.831094  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> /usr/share/ca-certificates/7527072.pem
	I0916 23:57:03.831107  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:03.831156  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:57:03.847964  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0916 23:57:03.934599  804231 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0916 23:57:03.938331  804231 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0916 23:57:03.950286  804231 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0916 23:57:03.953541  804231 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0916 23:57:03.965169  804231 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0916 23:57:03.968351  804231 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0916 23:57:03.979814  804231 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0916 23:57:03.982969  804231 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0916 23:57:03.993972  804231 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0916 23:57:03.997171  804231 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0916 23:57:04.008607  804231 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0916 23:57:04.011687  804231 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0916 23:57:04.023019  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 23:57:04.046509  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 23:57:04.069781  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 23:57:04.092702  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0916 23:57:04.114933  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0916 23:57:04.137173  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1671 bytes)
	I0916 23:57:04.159280  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 23:57:04.181367  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 23:57:04.203980  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem --> /usr/share/ca-certificates/752707.pem (1338 bytes)
	I0916 23:57:04.230248  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem --> /usr/share/ca-certificates/7527072.pem (1708 bytes)
	I0916 23:57:04.253628  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 23:57:04.276223  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0916 23:57:04.293552  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0916 23:57:04.309978  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0916 23:57:04.326237  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0916 23:57:04.342704  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0916 23:57:04.359099  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0916 23:57:04.375242  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0916 23:57:04.391611  804231 ssh_runner.go:195] Run: openssl version
	I0916 23:57:04.396637  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/752707.pem && ln -fs /usr/share/ca-certificates/752707.pem /etc/ssl/certs/752707.pem"
	I0916 23:57:04.405389  804231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/752707.pem
	I0916 23:57:04.408604  804231 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 23:54 /usr/share/ca-certificates/752707.pem
	I0916 23:57:04.408651  804231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/752707.pem
	I0916 23:57:04.414862  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/752707.pem /etc/ssl/certs/51391683.0"
	I0916 23:57:04.423583  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7527072.pem && ln -fs /usr/share/ca-certificates/7527072.pem /etc/ssl/certs/7527072.pem"
	I0916 23:57:04.432421  804231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7527072.pem
	I0916 23:57:04.435706  804231 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 23:54 /usr/share/ca-certificates/7527072.pem
	I0916 23:57:04.435752  804231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7527072.pem
	I0916 23:57:04.441863  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7527072.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 23:57:04.450595  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 23:57:04.459588  804231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:04.462866  804231 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:04.462907  804231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:04.469279  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
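Note on the openssl/ln pattern above: each CA is linked into /etc/ssl/certs under its own name and then again under its OpenSSL subject hash (e.g. b5213941.0 for minikubeCA.pem) so libraries that scan the hashed directory can find it. Reproduced by hand, using the minikubeCA.pem path from this run (a sketch, not part of the test output):

    # compute the c_rehash-style subject hash for the installed CA
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    # create the hash-named symlink; HASH is b5213941 in this run
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"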
	I0916 23:57:04.478135  804231 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 23:57:04.481236  804231 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 23:57:04.481288  804231 kubeadm.go:926] updating node {m02 192.168.49.3 8443 v1.34.0 containerd true true} ...
	I0916 23:57:04.481383  804231 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-472903-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 23:57:04.481425  804231 kube-vip.go:115] generating kube-vip config ...
	I0916 23:57:04.481462  804231 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0916 23:57:04.492937  804231 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0916 23:57:04.492999  804231 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
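Because `lsmod | grep ip_vs` returned nothing above, the generated kube-vip manifest runs without IPVS control-plane load-balancing and relies on ARP for the VIP 192.168.49.254. Whether the modules could be loaded on a given host can be probed manually; a sketch (the outcome depends on the kernel build):

    # check for already-loaded IPVS modules, then try to load them
    lsmod | grep ip_vs || echo "ip_vs not loaded"
    sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh || echo "ip_vs modules unavailable"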
	I0916 23:57:04.493041  804231 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0916 23:57:04.501084  804231 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 23:57:04.501123  804231 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0916 23:57:04.509217  804231 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0916 23:57:04.525587  804231 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 23:57:04.544042  804231 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0916 23:57:04.561542  804231 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0916 23:57:04.564725  804231 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
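The rewrite above pins control-plane.minikube.internal to the HA VIP 192.168.49.254 in /etc/hosts on the new node; a quick spot-check of that mapping (illustrative, not part of the run):

    getent hosts control-plane.minikube.internal
    # expected: 192.168.49.254   control-plane.minikube.internal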
	I0916 23:57:04.574819  804231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:04.638378  804231 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 23:57:04.659569  804231 host.go:66] Checking if "ha-472903" exists ...
	I0916 23:57:04.659878  804231 start.go:317] joinCluster: &{Name:ha-472903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 23:57:04.659986  804231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0916 23:57:04.660033  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:57:04.678136  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0916 23:57:04.817608  804231 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 23:57:04.817663  804231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 79akng.11lpa8n1ba4yh5m1 --discovery-token-ca-cert-hash sha256:52c78ec9ad9a2dc0941e43ce337b864c76ea573e452bc75ed737e69ad76deac1 --ignore-preflight-errors=all --cri-socket unix:///run/containerd/containerd.sock --node-name=ha-472903-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443"
	I0916 23:57:23.327384  804231 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 79akng.11lpa8n1ba4yh5m1 --discovery-token-ca-cert-hash sha256:52c78ec9ad9a2dc0941e43ce337b864c76ea573e452bc75ed737e69ad76deac1 --ignore-preflight-errors=all --cri-socket unix:///run/containerd/containerd.sock --node-name=ha-472903-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443": (18.509693377s)
	I0916 23:57:23.327447  804231 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0916 23:57:23.521334  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-472903-m02 minikube.k8s.io/updated_at=2025_09_16T23_57_23_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a minikube.k8s.io/name=ha-472903 minikube.k8s.io/primary=false
	I0916 23:57:23.592991  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-472903-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0916 23:57:23.664899  804231 start.go:319] duration metric: took 19.005017018s to joinCluster
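With the join finished, the label and taint commands above mark ha-472903-m02 and clear node-role.kubernetes.io/control-plane:NoSchedule so workloads can schedule there. A way to confirm both took effect (a sketch against the same kubeconfig):

    # labels should include minikube.k8s.io/name=ha-472903 and minikube.k8s.io/primary=false
    kubectl --kubeconfig /var/lib/minikube/kubeconfig get node ha-472903-m02 --show-labels
    # the control-plane:NoSchedule entry should be absent from the Taints line
    kubectl --kubeconfig /var/lib/minikube/kubeconfig describe node ha-472903-m02 | grep -i taints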
	I0916 23:57:23.664975  804231 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 23:57:23.665223  804231 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0916 23:57:23.665877  804231 out.go:179] * Verifying Kubernetes components...
	I0916 23:57:23.666680  804231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:23.766393  804231 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 23:57:23.779164  804231 kapi.go:59] client config for ha-472903: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key", CAFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0916 23:57:23.779228  804231 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0916 23:57:23.779511  804231 node_ready.go:35] waiting up to 6m0s for node "ha-472903-m02" to be "Ready" ...
	I0916 23:57:24.283593  804231 node_ready.go:49] node "ha-472903-m02" is "Ready"
	I0916 23:57:24.283628  804231 node_ready.go:38] duration metric: took 504.097895ms for node "ha-472903-m02" to be "Ready" ...
	I0916 23:57:24.283648  804231 api_server.go:52] waiting for apiserver process to appear ...
	I0916 23:57:24.283699  804231 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 23:57:24.295735  804231 api_server.go:72] duration metric: took 630.723924ms to wait for apiserver process to appear ...
	I0916 23:57:24.295758  804231 api_server.go:88] waiting for apiserver healthz status ...
	I0916 23:57:24.295774  804231 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0916 23:57:24.299650  804231 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0916 23:57:24.300537  804231 api_server.go:141] control plane version: v1.34.0
	I0916 23:57:24.300558  804231 api_server.go:131] duration metric: took 4.795429ms to wait for apiserver health ...
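The healthz gate above is a TLS GET against the first control plane using the profile's client certificate. Done by hand with the same files referenced earlier in this log it looks roughly like this (a sketch; "ok" is the expected body, as logged):

    curl --cacert /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt \
         --cert /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.crt \
         --key /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key \
         https://192.168.49.2:8443/healthz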
	I0916 23:57:24.300566  804231 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 23:57:24.304572  804231 system_pods.go:59] 19 kube-system pods found
	I0916 23:57:24.304598  804231 system_pods.go:61] "coredns-66bc5c9577-c94hz" [774f1c0f-9759-44c2-957d-5a97670f951b] Running
	I0916 23:57:24.304604  804231 system_pods.go:61] "coredns-66bc5c9577-qn8m7" [1c58205e-e865-42fc-8282-23e3d779ee97] Running
	I0916 23:57:24.304608  804231 system_pods.go:61] "etcd-ha-472903" [e333577b-838c-41c5-ba86-ce3d7de57077] Running
	I0916 23:57:24.304611  804231 system_pods.go:61] "etcd-ha-472903-m02" [8a478117-c53d-4621-aa09-be3c16d386c0] Pending
	I0916 23:57:24.304615  804231 system_pods.go:61] "kindnet-lh7dv" [1da43ca7-9af7-4573-9cdc-fd21b098ca2c] Running
	I0916 23:57:24.304621  804231 system_pods.go:61] "kindnet-mwf8l" [8c9533d3-defe-487b-a9b4-0502fb8f2d2a] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-mwf8l": pod kindnet-mwf8l is being deleted, cannot be assigned to a host)
	I0916 23:57:24.304628  804231 system_pods.go:61] "kindnet-q7c7s" [85db5b30-8ace-4bb0-8886-32b9ca032b2b] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-q7c7s": pod kindnet-q7c7s is already assigned to node "ha-472903-m02")
	I0916 23:57:24.304639  804231 system_pods.go:61] "kube-apiserver-ha-472903" [e2844751-3962-4753-8b63-79c124dd5fd7] Running
	I0916 23:57:24.304643  804231 system_pods.go:61] "kube-apiserver-ha-472903-m02" [6675419c-7693-4970-b73c-8415bcda1684] Pending
	I0916 23:57:24.304646  804231 system_pods.go:61] "kube-controller-manager-ha-472903" [be5cfd0b-a3b9-44cf-8cde-74e9eb89c738] Running
	I0916 23:57:24.304650  804231 system_pods.go:61] "kube-controller-manager-ha-472903-m02" [54f6e7e0-0a78-4651-b24f-f902c6bf7efb] Pending
	I0916 23:57:24.304657  804231 system_pods.go:61] "kube-proxy-58lkb" [32fed88c-ce9e-4536-8e96-04ab5b4f5d42] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-58lkb": pod kube-proxy-58lkb is already assigned to node "ha-472903-m02")
	I0916 23:57:24.304662  804231 system_pods.go:61] "kube-proxy-d4m8f" [d4a70eec-48a7-4ea6-871a-1b5ed2beca9a] Running
	I0916 23:57:24.304666  804231 system_pods.go:61] "kube-proxy-mf26q" [34502b32-75c1-4078-abd2-4e4d625252d8] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-mf26q": pod kube-proxy-mf26q is already assigned to node "ha-472903-m02")
	I0916 23:57:24.304670  804231 system_pods.go:61] "kube-scheduler-ha-472903" [e949de65-b218-45cb-abe7-79b704aae473] Running
	I0916 23:57:24.304677  804231 system_pods.go:61] "kube-scheduler-ha-472903-m02" [08b5a4f0-3aa6-4a82-b171-afc1eafcd4c2] Pending
	I0916 23:57:24.304679  804231 system_pods.go:61] "kube-vip-ha-472903" [ccdab212-cf0c-4bf0-958b-173e1008f7bc] Running
	I0916 23:57:24.304682  804231 system_pods.go:61] "kube-vip-ha-472903-m02" [748f096f-bec6-4de8-92f0-128db827bdd6] Pending
	I0916 23:57:24.304687  804231 system_pods.go:61] "storage-provisioner" [ac7f283e-4d28-46cf-a519-bd227237d5e7] Running
	I0916 23:57:24.304694  804231 system_pods.go:74] duration metric: took 4.122792ms to wait for pod list to return data ...
	I0916 23:57:24.304700  804231 default_sa.go:34] waiting for default service account to be created ...
	I0916 23:57:24.307165  804231 default_sa.go:45] found service account: "default"
	I0916 23:57:24.307183  804231 default_sa.go:55] duration metric: took 2.474442ms for default service account to be created ...
	I0916 23:57:24.307190  804231 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 23:57:24.310491  804231 system_pods.go:86] 19 kube-system pods found
	I0916 23:57:24.310512  804231 system_pods.go:89] "coredns-66bc5c9577-c94hz" [774f1c0f-9759-44c2-957d-5a97670f951b] Running
	I0916 23:57:24.310517  804231 system_pods.go:89] "coredns-66bc5c9577-qn8m7" [1c58205e-e865-42fc-8282-23e3d779ee97] Running
	I0916 23:57:24.310520  804231 system_pods.go:89] "etcd-ha-472903" [e333577b-838c-41c5-ba86-ce3d7de57077] Running
	I0916 23:57:24.310524  804231 system_pods.go:89] "etcd-ha-472903-m02" [8a478117-c53d-4621-aa09-be3c16d386c0] Pending
	I0916 23:57:24.310527  804231 system_pods.go:89] "kindnet-lh7dv" [1da43ca7-9af7-4573-9cdc-fd21b098ca2c] Running
	I0916 23:57:24.310532  804231 system_pods.go:89] "kindnet-mwf8l" [8c9533d3-defe-487b-a9b4-0502fb8f2d2a] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-mwf8l": pod kindnet-mwf8l is being deleted, cannot be assigned to a host)
	I0916 23:57:24.310556  804231 system_pods.go:89] "kindnet-q7c7s" [85db5b30-8ace-4bb0-8886-32b9ca032b2b] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-q7c7s": pod kindnet-q7c7s is already assigned to node "ha-472903-m02")
	I0916 23:57:24.310566  804231 system_pods.go:89] "kube-apiserver-ha-472903" [e2844751-3962-4753-8b63-79c124dd5fd7] Running
	I0916 23:57:24.310571  804231 system_pods.go:89] "kube-apiserver-ha-472903-m02" [6675419c-7693-4970-b73c-8415bcda1684] Pending
	I0916 23:57:24.310576  804231 system_pods.go:89] "kube-controller-manager-ha-472903" [be5cfd0b-a3b9-44cf-8cde-74e9eb89c738] Running
	I0916 23:57:24.310580  804231 system_pods.go:89] "kube-controller-manager-ha-472903-m02" [54f6e7e0-0a78-4651-b24f-f902c6bf7efb] Pending
	I0916 23:57:24.310588  804231 system_pods.go:89] "kube-proxy-58lkb" [32fed88c-ce9e-4536-8e96-04ab5b4f5d42] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-58lkb": pod kube-proxy-58lkb is already assigned to node "ha-472903-m02")
	I0916 23:57:24.310591  804231 system_pods.go:89] "kube-proxy-d4m8f" [d4a70eec-48a7-4ea6-871a-1b5ed2beca9a] Running
	I0916 23:57:24.310596  804231 system_pods.go:89] "kube-proxy-mf26q" [34502b32-75c1-4078-abd2-4e4d625252d8] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-mf26q": pod kube-proxy-mf26q is already assigned to node "ha-472903-m02")
	I0916 23:57:24.310600  804231 system_pods.go:89] "kube-scheduler-ha-472903" [e949de65-b218-45cb-abe7-79b704aae473] Running
	I0916 23:57:24.310603  804231 system_pods.go:89] "kube-scheduler-ha-472903-m02" [08b5a4f0-3aa6-4a82-b171-afc1eafcd4c2] Pending
	I0916 23:57:24.310608  804231 system_pods.go:89] "kube-vip-ha-472903" [ccdab212-cf0c-4bf0-958b-173e1008f7bc] Running
	I0916 23:57:24.310611  804231 system_pods.go:89] "kube-vip-ha-472903-m02" [748f096f-bec6-4de8-92f0-128db827bdd6] Pending
	I0916 23:57:24.310614  804231 system_pods.go:89] "storage-provisioner" [ac7f283e-4d28-46cf-a519-bd227237d5e7] Running
	I0916 23:57:24.310621  804231 system_pods.go:126] duration metric: took 3.426124ms to wait for k8s-apps to be running ...
	I0916 23:57:24.310629  804231 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 23:57:24.310666  804231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 23:57:24.322152  804231 system_svc.go:56] duration metric: took 11.515834ms WaitForService to wait for kubelet
	I0916 23:57:24.322176  804231 kubeadm.go:578] duration metric: took 657.167547ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 23:57:24.322199  804231 node_conditions.go:102] verifying NodePressure condition ...
	I0916 23:57:24.327707  804231 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 23:57:24.327734  804231 node_conditions.go:123] node cpu capacity is 8
	I0916 23:57:24.327748  804231 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 23:57:24.327754  804231 node_conditions.go:123] node cpu capacity is 8
	I0916 23:57:24.327759  804231 node_conditions.go:105] duration metric: took 5.554046ms to run NodePressure ...
	I0916 23:57:24.327772  804231 start.go:241] waiting for startup goroutines ...
	I0916 23:57:24.327803  804231 start.go:255] writing updated cluster config ...
	I0916 23:57:24.329316  804231 out.go:203] 
	I0916 23:57:24.330356  804231 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0916 23:57:24.330485  804231 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0916 23:57:24.331956  804231 out.go:179] * Starting "ha-472903-m03" control-plane node in "ha-472903" cluster
	I0916 23:57:24.332973  804231 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0916 23:57:24.333962  804231 out.go:179] * Pulling base image v0.0.48 ...
	I0916 23:57:24.334852  804231 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0916 23:57:24.334875  804231 cache.go:58] Caching tarball of preloaded images
	I0916 23:57:24.334942  804231 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0916 23:57:24.334986  804231 preload.go:172] Found /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 23:57:24.334997  804231 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0916 23:57:24.335117  804231 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0916 23:57:24.357217  804231 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0916 23:57:24.357233  804231 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0916 23:57:24.357242  804231 cache.go:232] Successfully downloaded all kic artifacts
	I0916 23:57:24.357267  804231 start.go:360] acquireMachinesLock for ha-472903-m03: {Name:mk61000bb8e4699ca3310a7fc257e30a156b69de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 23:57:24.357354  804231 start.go:364] duration metric: took 71.354µs to acquireMachinesLock for "ha-472903-m03"
	I0916 23:57:24.357375  804231 start.go:93] Provisioning new machine with config: &{Name:ha-472903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 23:57:24.357498  804231 start.go:125] createHost starting for "m03" (driver="docker")
	I0916 23:57:24.358917  804231 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0916 23:57:24.358994  804231 start.go:159] libmachine.API.Create for "ha-472903" (driver="docker")
	I0916 23:57:24.359023  804231 client.go:168] LocalClient.Create starting
	I0916 23:57:24.359071  804231 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem
	I0916 23:57:24.359103  804231 main.go:141] libmachine: Decoding PEM data...
	I0916 23:57:24.359116  804231 main.go:141] libmachine: Parsing certificate...
	I0916 23:57:24.359164  804231 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem
	I0916 23:57:24.359182  804231 main.go:141] libmachine: Decoding PEM data...
	I0916 23:57:24.359192  804231 main.go:141] libmachine: Parsing certificate...
	I0916 23:57:24.359366  804231 cli_runner.go:164] Run: docker network inspect ha-472903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:57:24.375654  804231 network_create.go:77] Found existing network {name:ha-472903 subnet:0xc001b33bf0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0916 23:57:24.375684  804231 kic.go:121] calculated static IP "192.168.49.4" for the "ha-472903-m03" container
	I0916 23:57:24.375740  804231 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 23:57:24.392165  804231 cli_runner.go:164] Run: docker volume create ha-472903-m03 --label name.minikube.sigs.k8s.io=ha-472903-m03 --label created_by.minikube.sigs.k8s.io=true
	I0916 23:57:24.408273  804231 oci.go:103] Successfully created a docker volume ha-472903-m03
	I0916 23:57:24.408342  804231 cli_runner.go:164] Run: docker run --rm --name ha-472903-m03-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-472903-m03 --entrypoint /usr/bin/test -v ha-472903-m03:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0916 23:57:24.957699  804231 oci.go:107] Successfully prepared a docker volume ha-472903-m03
	I0916 23:57:24.957748  804231 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0916 23:57:24.957783  804231 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 23:57:24.957856  804231 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-472903-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 23:57:29.095091  804231 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-472903-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.13717471s)
	I0916 23:57:29.095123  804231 kic.go:203] duration metric: took 4.137337977s to extract preloaded images to volume ...
	W0916 23:57:29.095214  804231 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0916 23:57:29.095253  804231 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0916 23:57:29.095300  804231 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 23:57:29.145859  804231 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-472903-m03 --name ha-472903-m03 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-472903-m03 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-472903-m03 --network ha-472903 --ip 192.168.49.4 --volume ha-472903-m03:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0916 23:57:29.392873  804231 cli_runner.go:164] Run: docker container inspect ha-472903-m03 --format={{.State.Running}}
	I0916 23:57:29.412389  804231 cli_runner.go:164] Run: docker container inspect ha-472903-m03 --format={{.State.Status}}
	I0916 23:57:29.430593  804231 cli_runner.go:164] Run: docker exec ha-472903-m03 stat /var/lib/dpkg/alternatives/iptables
	I0916 23:57:29.476672  804231 oci.go:144] the created container "ha-472903-m03" has a running status.
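The m03 node container was created with the static IP calculated above (192.168.49.4); the same inspect template minikube uses later in this log will confirm the assignment (a sketch):

    docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' ha-472903-m03
    # expected: 192.168.49.4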
	I0916 23:57:29.476707  804231 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa...
	I0916 23:57:29.927926  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0916 23:57:29.927968  804231 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 23:57:29.954518  804231 cli_runner.go:164] Run: docker container inspect ha-472903-m03 --format={{.State.Status}}
	I0916 23:57:29.975503  804231 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 23:57:29.975522  804231 kic_runner.go:114] Args: [docker exec --privileged ha-472903-m03 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 23:57:30.023965  804231 cli_runner.go:164] Run: docker container inspect ha-472903-m03 --format={{.State.Status}}
	I0916 23:57:30.040966  804231 machine.go:93] provisionDockerMachine start ...
	I0916 23:57:30.041051  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0916 23:57:30.058157  804231 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:30.058388  804231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33554 <nil> <nil>}
	I0916 23:57:30.058400  804231 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 23:57:30.190964  804231 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-472903-m03
	
	I0916 23:57:30.190995  804231 ubuntu.go:182] provisioning hostname "ha-472903-m03"
	I0916 23:57:30.191059  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0916 23:57:30.208862  804231 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:30.209123  804231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33554 <nil> <nil>}
	I0916 23:57:30.209144  804231 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-472903-m03 && echo "ha-472903-m03" | sudo tee /etc/hostname
	I0916 23:57:30.354363  804231 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-472903-m03
	
	I0916 23:57:30.354466  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0916 23:57:30.372285  804231 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:30.372570  804231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33554 <nil> <nil>}
	I0916 23:57:30.372590  804231 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-472903-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-472903-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-472903-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 23:57:30.504861  804231 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 23:57:30.504898  804231 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-749120/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-749120/.minikube}
	I0916 23:57:30.504920  804231 ubuntu.go:190] setting up certificates
	I0916 23:57:30.504933  804231 provision.go:84] configureAuth start
	I0916 23:57:30.504996  804231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m03
	I0916 23:57:30.522218  804231 provision.go:143] copyHostCerts
	I0916 23:57:30.522259  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem
	I0916 23:57:30.522297  804231 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem, removing ...
	I0916 23:57:30.522306  804231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem
	I0916 23:57:30.522369  804231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem (1078 bytes)
	I0916 23:57:30.522483  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem
	I0916 23:57:30.522506  804231 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem, removing ...
	I0916 23:57:30.522510  804231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem
	I0916 23:57:30.522547  804231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem (1123 bytes)
	I0916 23:57:30.522650  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem
	I0916 23:57:30.522673  804231 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem, removing ...
	I0916 23:57:30.522678  804231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem
	I0916 23:57:30.522703  804231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem (1675 bytes)
	I0916 23:57:30.522769  804231 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem org=jenkins.ha-472903-m03 san=[127.0.0.1 192.168.49.4 ha-472903-m03 localhost minikube]
	I0916 23:57:30.644066  804231 provision.go:177] copyRemoteCerts
	I0916 23:57:30.644118  804231 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 23:57:30.644153  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0916 23:57:30.661612  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33554 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa Username:docker}
	I0916 23:57:30.757452  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 23:57:30.757504  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 23:57:30.782942  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 23:57:30.782994  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 23:57:30.806508  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 23:57:30.806562  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 23:57:30.829686  804231 provision.go:87] duration metric: took 324.735799ms to configureAuth
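configureAuth above generated a Docker machine server certificate with SANs [127.0.0.1 192.168.49.4 ha-472903-m03 localhost minikube] and copied it to /etc/docker on the node; the SANs can be checked on the node with openssl (illustrative):

    openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'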
	I0916 23:57:30.829709  804231 ubuntu.go:206] setting minikube options for container-runtime
	I0916 23:57:30.829902  804231 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0916 23:57:30.829916  804231 machine.go:96] duration metric: took 788.930334ms to provisionDockerMachine
	I0916 23:57:30.829925  804231 client.go:171] duration metric: took 6.470893656s to LocalClient.Create
	I0916 23:57:30.829958  804231 start.go:167] duration metric: took 6.470963089s to libmachine.API.Create "ha-472903"
	I0916 23:57:30.829971  804231 start.go:293] postStartSetup for "ha-472903-m03" (driver="docker")
	I0916 23:57:30.829982  804231 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 23:57:30.830042  804231 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 23:57:30.830092  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0916 23:57:30.847215  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33554 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa Username:docker}
	I0916 23:57:30.945849  804231 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 23:57:30.949055  804231 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 23:57:30.949086  804231 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 23:57:30.949098  804231 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 23:57:30.949107  804231 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0916 23:57:30.949120  804231 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-749120/.minikube/addons for local assets ...
	I0916 23:57:30.949174  804231 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-749120/.minikube/files for local assets ...
	I0916 23:57:30.949274  804231 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> 7527072.pem in /etc/ssl/certs
	I0916 23:57:30.949286  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> /etc/ssl/certs/7527072.pem
	I0916 23:57:30.949392  804231 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 23:57:30.957998  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem --> /etc/ssl/certs/7527072.pem (1708 bytes)
	I0916 23:57:30.983779  804231 start.go:296] duration metric: took 153.794843ms for postStartSetup
	I0916 23:57:30.984109  804231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m03
	I0916 23:57:31.001367  804231 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0916 23:57:31.001618  804231 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 23:57:31.001659  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0916 23:57:31.019034  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33554 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa Username:docker}
	I0916 23:57:31.110814  804231 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 23:57:31.115046  804231 start.go:128] duration metric: took 6.757532739s to createHost
	I0916 23:57:31.115072  804231 start.go:83] releasing machines lock for "ha-472903-m03", held for 6.757707303s
	I0916 23:57:31.115154  804231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m03
	I0916 23:57:31.133371  804231 out.go:179] * Found network options:
	I0916 23:57:31.134481  804231 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0916 23:57:31.135570  804231 proxy.go:120] fail to check proxy env: Error ip not in block
	W0916 23:57:31.135598  804231 proxy.go:120] fail to check proxy env: Error ip not in block
	W0916 23:57:31.135626  804231 proxy.go:120] fail to check proxy env: Error ip not in block
	W0916 23:57:31.135644  804231 proxy.go:120] fail to check proxy env: Error ip not in block
	I0916 23:57:31.135714  804231 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 23:57:31.135763  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0916 23:57:31.135778  804231 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 23:57:31.135845  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0916 23:57:31.152320  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33554 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa Username:docker}
	I0916 23:57:31.153909  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33554 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa Username:docker}
	I0916 23:57:31.320495  804231 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 23:57:31.348141  804231 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 23:57:31.348214  804231 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 23:57:31.373693  804231 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0916 23:57:31.373720  804231 start.go:495] detecting cgroup driver to use...
	I0916 23:57:31.373748  804231 detect.go:190] detected "systemd" cgroup driver on host os
	I0916 23:57:31.373802  804231 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 23:57:31.385560  804231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 23:57:31.396165  804231 docker.go:218] disabling cri-docker service (if available) ...
	I0916 23:57:31.396214  804231 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 23:57:31.409119  804231 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 23:57:31.422244  804231 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 23:57:31.489491  804231 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 23:57:31.557098  804231 docker.go:234] disabling docker service ...
	I0916 23:57:31.557149  804231 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 23:57:31.574601  804231 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 23:57:31.585773  804231 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 23:57:31.649988  804231 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 23:57:31.717070  804231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 23:57:31.727904  804231 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 23:57:31.743685  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0916 23:57:31.755962  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 23:57:31.766072  804231 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0916 23:57:31.766138  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0916 23:57:31.775522  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 23:57:31.785914  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 23:57:31.795134  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 23:57:31.804565  804231 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 23:57:31.813319  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 23:57:31.822500  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 23:57:31.831597  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 23:57:31.840887  804231 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 23:57:31.848842  804231 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 23:57:31.857026  804231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:31.920521  804231 ssh_runner.go:195] Run: sudo systemctl restart containerd
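The sed edits above pin the sandbox image to registry.k8s.io/pause:3.10.1, switch containerd to the systemd cgroup driver (SystemdCgroup = true), drop the legacy v1 runtime entries, and then restart the daemon. A quick way to confirm the result on the node afterwards (a sketch using the same default config path):

    grep -E 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml
    # expected after the edits:
    #   sandbox_image = "registry.k8s.io/pause:3.10.1"
    #   SystemdCgroup = true
    sudo systemctl is-active containerd   # should print "active"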
	I0916 23:57:32.022746  804231 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0916 23:57:32.022804  804231 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0916 23:57:32.026838  804231 start.go:563] Will wait 60s for crictl version
	I0916 23:57:32.026888  804231 ssh_runner.go:195] Run: which crictl
	I0916 23:57:32.030295  804231 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 23:57:32.064100  804231 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0916 23:57:32.064158  804231 ssh_runner.go:195] Run: containerd --version
	I0916 23:57:32.088276  804231 ssh_runner.go:195] Run: containerd --version
	I0916 23:57:32.114182  804231 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0916 23:57:32.115194  804231 out.go:179]   - env NO_PROXY=192.168.49.2
	I0916 23:57:32.116236  804231 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0916 23:57:32.117151  804231 cli_runner.go:164] Run: docker network inspect ha-472903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:57:32.133290  804231 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 23:57:32.136901  804231 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
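The /etc/hosts rewrite uses a strip-then-append idiom so repeated starts stay idempotent: any existing host.minikube.internal line is filtered out before the current mapping is appended and copied back into place. Spelled out (a sketch; the temporary file name here is illustrative):

    { grep -v $'\thost.minikube.internal$' /etc/hosts
      echo "192.168.49.1	host.minikube.internal"; } > /tmp/hosts.new
    sudo cp /tmp/hosts.new /etc/hosts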
	I0916 23:57:32.147860  804231 mustload.go:65] Loading cluster: ha-472903
	I0916 23:57:32.148060  804231 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0916 23:57:32.148275  804231 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0916 23:57:32.164278  804231 host.go:66] Checking if "ha-472903" exists ...
	I0916 23:57:32.164570  804231 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903 for IP: 192.168.49.4
	I0916 23:57:32.164584  804231 certs.go:194] generating shared ca certs ...
	I0916 23:57:32.164601  804231 certs.go:226] acquiring lock for ca certs: {Name:mk87d179b4a631193bd9c86db8034ccf19400cde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:32.164751  804231 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key
	I0916 23:57:32.164800  804231 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key
	I0916 23:57:32.164814  804231 certs.go:256] generating profile certs ...
	I0916 23:57:32.164911  804231 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key
	I0916 23:57:32.164940  804231 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.14b885b8
	I0916 23:57:32.164958  804231 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.14b885b8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I0916 23:57:32.342596  804231 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.14b885b8 ...
	I0916 23:57:32.342623  804231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.14b885b8: {Name:mk455c3f0ae4544ddcdf75c25cbd1b87a24e61a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:32.342787  804231 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.14b885b8 ...
	I0916 23:57:32.342799  804231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.14b885b8: {Name:mkbd551bf9ae23c129f7e263550d20b4aac5d095 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:32.342871  804231 certs.go:381] copying /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.14b885b8 -> /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt
	I0916 23:57:32.343007  804231 certs.go:385] copying /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.14b885b8 -> /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key
	I0916 23:57:32.343136  804231 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key
	I0916 23:57:32.343152  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 23:57:32.343165  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 23:57:32.343178  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 23:57:32.343191  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 23:57:32.343204  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 23:57:32.343214  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 23:57:32.343229  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 23:57:32.343247  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 23:57:32.343299  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem (1338 bytes)
	W0916 23:57:32.343327  804231 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707_empty.pem, impossibly tiny 0 bytes
	I0916 23:57:32.343337  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem (1675 bytes)
	I0916 23:57:32.343357  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem (1078 bytes)
	I0916 23:57:32.343379  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem (1123 bytes)
	I0916 23:57:32.343400  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem (1675 bytes)
	I0916 23:57:32.343464  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem (1708 bytes)
	I0916 23:57:32.343501  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:32.343521  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem -> /usr/share/ca-certificates/752707.pem
	I0916 23:57:32.343534  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> /usr/share/ca-certificates/7527072.pem
	I0916 23:57:32.343588  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:57:32.360782  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0916 23:57:32.447595  804231 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0916 23:57:32.451217  804231 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0916 23:57:32.464033  804231 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0916 23:57:32.467273  804231 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0916 23:57:32.478860  804231 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0916 23:57:32.482180  804231 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0916 23:57:32.493717  804231 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0916 23:57:32.496761  804231 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0916 23:57:32.507849  804231 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0916 23:57:32.511054  804231 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0916 23:57:32.523733  804231 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0916 23:57:32.526954  804231 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0916 23:57:32.538314  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 23:57:32.561866  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 23:57:32.585900  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 23:57:32.610048  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0916 23:57:32.634812  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0916 23:57:32.659163  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 23:57:32.682157  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 23:57:32.704663  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 23:57:32.727856  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 23:57:32.752740  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem --> /usr/share/ca-certificates/752707.pem (1338 bytes)
	I0916 23:57:32.775900  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem --> /usr/share/ca-certificates/7527072.pem (1708 bytes)
	I0916 23:57:32.798720  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0916 23:57:32.815542  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0916 23:57:32.832241  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0916 23:57:32.848964  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0916 23:57:32.865780  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0916 23:57:32.882614  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0916 23:57:32.899296  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
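The apiserver certificate generated at 23:57:32 was issued with SANs for 10.96.0.1, 127.0.0.1, 10.0.0.1, the three node IPs 192.168.49.2-4, and the kube-vip address 192.168.49.254, which is what lets any control plane or the VIP terminate TLS with the same key pair. To confirm the SAN list on the node after the copy (a sketch):

    sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text \
      | grep -A2 'Subject Alternative Name'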
	I0916 23:57:32.916516  804231 ssh_runner.go:195] Run: openssl version
	I0916 23:57:32.921611  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7527072.pem && ln -fs /usr/share/ca-certificates/7527072.pem /etc/ssl/certs/7527072.pem"
	I0916 23:57:32.930917  804231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7527072.pem
	I0916 23:57:32.934241  804231 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 23:54 /usr/share/ca-certificates/7527072.pem
	I0916 23:57:32.934283  804231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7527072.pem
	I0916 23:57:32.941354  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7527072.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 23:57:32.950335  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 23:57:32.959292  804231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:32.962576  804231 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:32.962623  804231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:32.968989  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 23:57:32.978331  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/752707.pem && ln -fs /usr/share/ca-certificates/752707.pem /etc/ssl/certs/752707.pem"
	I0916 23:57:32.987188  804231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/752707.pem
	I0916 23:57:32.990463  804231 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 23:54 /usr/share/ca-certificates/752707.pem
	I0916 23:57:32.990497  804231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/752707.pem
	I0916 23:57:32.996813  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/752707.pem /etc/ssl/certs/51391683.0"
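The link names created in this block (b5213941.0, 3ec20f2e.0, 51391683.0) are OpenSSL subject-hash values; each CA PEM is linked into /etc/ssl/certs under its hash so TLS libraries that look up issuers by subject hash can find it. The hash is reproducible from the certificate itself (a sketch):

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # prints b5213941, matching the /etc/ssl/certs/b5213941.0 symlink above
    ls -l /etc/ssl/certs/b5213941.0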
	I0916 23:57:33.005924  804231 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 23:57:33.009122  804231 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 23:57:33.009183  804231 kubeadm.go:926] updating node {m03 192.168.49.4 8443 v1.34.0 containerd true true} ...
	I0916 23:57:33.009266  804231 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-472903-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 23:57:33.009291  804231 kube-vip.go:115] generating kube-vip config ...
	I0916 23:57:33.009319  804231 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0916 23:57:33.021189  804231 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0916 23:57:33.021246  804231 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
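Because the lsmod probe found no ip_vs modules inside the kic container, the manifest above runs kube-vip purely as an ARP-announced VIP with leader election (vip_arp, cp_enable, vip_leaderelection) rather than as an IPVS load balancer. Checking the module state and the rendered static pod on the node would look like this (a sketch):

    lsmod | grep ip_vs || echo "ip_vs not loaded - ARP mode only"
    sudo cat /etc/kubernetes/manifests/kube-vip.yaml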
	I0916 23:57:33.021293  804231 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0916 23:57:33.029533  804231 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 23:57:33.029576  804231 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0916 23:57:33.038861  804231 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0916 23:57:33.056092  804231 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 23:57:33.075506  804231 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0916 23:57:33.093918  804231 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0916 23:57:33.097171  804231 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 23:57:33.107668  804231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:33.167706  804231 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 23:57:33.188453  804231 host.go:66] Checking if "ha-472903" exists ...
	I0916 23:57:33.188671  804231 start.go:317] joinCluster: &{Name:ha-472903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 23:57:33.188781  804231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0916 23:57:33.188819  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:57:33.210165  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0916 23:57:33.351871  804231 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 23:57:33.351930  804231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token uj456s.97hymgg3kmg6owuv --discovery-token-ca-cert-hash sha256:52c78ec9ad9a2dc0941e43ce337b864c76ea573e452bc75ed737e69ad76deac1 --ignore-preflight-errors=all --cri-socket unix:///run/containerd/containerd.sock --node-name=ha-472903-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443"
	I0916 23:57:51.860237  804231 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token uj456s.97hymgg3kmg6owuv --discovery-token-ca-cert-hash sha256:52c78ec9ad9a2dc0941e43ce337b864c76ea573e452bc75ed737e69ad76deac1 --ignore-preflight-errors=all --cri-socket unix:///run/containerd/containerd.sock --node-name=ha-472903-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443": (18.508258539s)
	I0916 23:57:51.860308  804231 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0916 23:57:52.080986  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-472903-m03 minikube.k8s.io/updated_at=2025_09_16T23_57_52_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a minikube.k8s.io/name=ha-472903 minikube.k8s.io/primary=false
	I0916 23:57:52.152525  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-472903-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0916 23:57:52.226560  804231 start.go:319] duration metric: took 19.037884553s to joinCluster
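The join itself follows the two-step pattern visible above: the primary mints a join command with kubeadm token create --print-join-command, and the new machine runs it with --control-plane and its own advertise address, after which the node is labeled and its control-plane NoSchedule taint is removed. Verifying the third control plane from the node afterwards would look like this (a sketch using the binary and kubeconfig paths from this run):

    sudo /var/lib/minikube/binaries/v1.34.0/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig get nodes -o wide
    sudo /var/lib/minikube/binaries/v1.34.0/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get pods -l component=etcd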
	I0916 23:57:52.226624  804231 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 23:57:52.226912  804231 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0916 23:57:52.227744  804231 out.go:179] * Verifying Kubernetes components...
	I0916 23:57:52.228620  804231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:52.334638  804231 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 23:57:52.349036  804231 kapi.go:59] client config for ha-472903: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key", CAFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0916 23:57:52.349105  804231 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0916 23:57:52.349317  804231 node_ready.go:35] waiting up to 6m0s for node "ha-472903-m03" to be "Ready" ...
	I0916 23:57:54.352346  804231 node_ready.go:49] node "ha-472903-m03" is "Ready"
	I0916 23:57:54.352374  804231 node_ready.go:38] duration metric: took 2.003044453s for node "ha-472903-m03" to be "Ready" ...
	I0916 23:57:54.352389  804231 api_server.go:52] waiting for apiserver process to appear ...
	I0916 23:57:54.352476  804231 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 23:57:54.365259  804231 api_server.go:72] duration metric: took 2.138606454s to wait for apiserver process to appear ...
	I0916 23:57:54.365280  804231 api_server.go:88] waiting for apiserver healthz status ...
	I0916 23:57:54.365298  804231 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0916 23:57:54.370985  804231 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0916 23:57:54.371831  804231 api_server.go:141] control plane version: v1.34.0
	I0916 23:57:54.371850  804231 api_server.go:131] duration metric: took 6.564025ms to wait for apiserver health ...
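The health probe goes straight to the first control plane (192.168.49.2:8443) rather than through the VIP; /healthz and /version are readable without credentials under the default system:public-info-viewer binding, so a plain HTTPS GET suffices. A manual equivalent (a sketch; -k because the CA is the cluster-local minikubeCA):

    curl -sk https://192.168.49.2:8443/healthz   # "ok"
    curl -sk https://192.168.49.2:8443/version   # reports v1.34.0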
	I0916 23:57:54.371858  804231 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 23:57:54.376785  804231 system_pods.go:59] 27 kube-system pods found
	I0916 23:57:54.376811  804231 system_pods.go:61] "coredns-66bc5c9577-c94hz" [774f1c0f-9759-44c2-957d-5a97670f951b] Running
	I0916 23:57:54.376815  804231 system_pods.go:61] "coredns-66bc5c9577-qn8m7" [1c58205e-e865-42fc-8282-23e3d779ee97] Running
	I0916 23:57:54.376818  804231 system_pods.go:61] "etcd-ha-472903" [e333577b-838c-41c5-ba86-ce3d7de57077] Running
	I0916 23:57:54.376822  804231 system_pods.go:61] "etcd-ha-472903-m02" [8a478117-c53d-4621-aa09-be3c16d386c0] Running
	I0916 23:57:54.376824  804231 system_pods.go:61] "etcd-ha-472903-m03" [73e10c6a-306a-4c7e-b816-1b8d6b815292] Pending
	I0916 23:57:54.376830  804231 system_pods.go:61] "kindnet-2dqnn" [f5c4164d-0d88-4b7b-bc52-18a7e211fe98] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-2dqnn": pod kindnet-2dqnn is already assigned to node "ha-472903-m03")
	I0916 23:57:54.376833  804231 system_pods.go:61] "kindnet-lh7dv" [1da43ca7-9af7-4573-9cdc-fd21b098ca2c] Running
	I0916 23:57:54.376838  804231 system_pods.go:61] "kindnet-q7c7s" [85db5b30-8ace-4bb0-8886-32b9ca032b2b] Running
	I0916 23:57:54.376842  804231 system_pods.go:61] "kindnet-wwdfr" [e86a6e30-712e-4d39-a235-87489d16c0f3] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-wwdfr": pod kindnet-wwdfr is already assigned to node "ha-472903-m03")
	I0916 23:57:54.376849  804231 system_pods.go:61] "kindnet-x6twd" [f2346479-1adb-4bc7-af07-971525be2b05] Pending: PodScheduled:SchedulerError (pod f2346479-1adb-4bc7-af07-971525be2b05(kube-system/kindnet-x6twd) is in the cache, so can't be assumed)
	I0916 23:57:54.376853  804231 system_pods.go:61] "kube-apiserver-ha-472903" [e2844751-3962-4753-8b63-79c124dd5fd7] Running
	I0916 23:57:54.376858  804231 system_pods.go:61] "kube-apiserver-ha-472903-m02" [6675419c-7693-4970-b73c-8415bcda1684] Running
	I0916 23:57:54.376861  804231 system_pods.go:61] "kube-apiserver-ha-472903-m03" [8c79e747-e193-4471-a4be-ab4d604998ad] Pending
	I0916 23:57:54.376867  804231 system_pods.go:61] "kube-controller-manager-ha-472903" [be5cfd0b-a3b9-44cf-8cde-74e9eb89c738] Running
	I0916 23:57:54.376870  804231 system_pods.go:61] "kube-controller-manager-ha-472903-m02" [54f6e7e0-0a78-4651-b24f-f902c6bf7efb] Running
	I0916 23:57:54.376873  804231 system_pods.go:61] "kube-controller-manager-ha-472903-m03" [d48b1c84-6653-43e8-9322-ae2c64471dde] Pending
	I0916 23:57:54.376876  804231 system_pods.go:61] "kube-proxy-58lkb" [32fed88c-ce9e-4536-8e96-04ab5b4f5d42] Running
	I0916 23:57:54.376881  804231 system_pods.go:61] "kube-proxy-d4m8f" [d4a70eec-48a7-4ea6-871a-1b5ed2beca9a] Running
	I0916 23:57:54.376885  804231 system_pods.go:61] "kube-proxy-kn6nb" [53644856-9fda-4556-bbb5-12254c4b00a3] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-kn6nb": pod kube-proxy-kn6nb is already assigned to node "ha-472903-m03")
	I0916 23:57:54.376889  804231 system_pods.go:61] "kube-proxy-xhlnz" [1967fed1-7529-46d0-accd-ab74751b47fa] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-xhlnz": pod kube-proxy-xhlnz is already assigned to node "ha-472903-m03")
	I0916 23:57:54.376894  804231 system_pods.go:61] "kube-scheduler-ha-472903" [e949de65-b218-45cb-abe7-79b704aae473] Running
	I0916 23:57:54.376897  804231 system_pods.go:61] "kube-scheduler-ha-472903-m02" [08b5a4f0-3aa6-4a82-b171-afc1eafcd4c2] Running
	I0916 23:57:54.376900  804231 system_pods.go:61] "kube-scheduler-ha-472903-m03" [7b954c46-3b8c-47e7-b10c-a20dd936d45c] Pending
	I0916 23:57:54.376904  804231 system_pods.go:61] "kube-vip-ha-472903" [ccdab212-cf0c-4bf0-958b-173e1008f7bc] Running
	I0916 23:57:54.376907  804231 system_pods.go:61] "kube-vip-ha-472903-m02" [748f096f-bec6-4de8-92f0-128db827bdd6] Running
	I0916 23:57:54.376910  804231 system_pods.go:61] "kube-vip-ha-472903-m03" [62b1f237-95c3-4a40-b3b7-a519f7c80ad4] Pending
	I0916 23:57:54.376913  804231 system_pods.go:61] "storage-provisioner" [ac7f283e-4d28-46cf-a519-bd227237d5e7] Running
	I0916 23:57:54.376918  804231 system_pods.go:74] duration metric: took 5.052009ms to wait for pod list to return data ...
	I0916 23:57:54.376925  804231 default_sa.go:34] waiting for default service account to be created ...
	I0916 23:57:54.378969  804231 default_sa.go:45] found service account: "default"
	I0916 23:57:54.378989  804231 default_sa.go:55] duration metric: took 2.056584ms for default service account to be created ...
	I0916 23:57:54.378999  804231 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 23:57:54.383753  804231 system_pods.go:86] 27 kube-system pods found
	I0916 23:57:54.383781  804231 system_pods.go:89] "coredns-66bc5c9577-c94hz" [774f1c0f-9759-44c2-957d-5a97670f951b] Running
	I0916 23:57:54.383790  804231 system_pods.go:89] "coredns-66bc5c9577-qn8m7" [1c58205e-e865-42fc-8282-23e3d779ee97] Running
	I0916 23:57:54.383796  804231 system_pods.go:89] "etcd-ha-472903" [e333577b-838c-41c5-ba86-ce3d7de57077] Running
	I0916 23:57:54.383802  804231 system_pods.go:89] "etcd-ha-472903-m02" [8a478117-c53d-4621-aa09-be3c16d386c0] Running
	I0916 23:57:54.383812  804231 system_pods.go:89] "etcd-ha-472903-m03" [73e10c6a-306a-4c7e-b816-1b8d6b815292] Pending
	I0916 23:57:54.383821  804231 system_pods.go:89] "kindnet-2dqnn" [f5c4164d-0d88-4b7b-bc52-18a7e211fe98] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-2dqnn": pod kindnet-2dqnn is already assigned to node "ha-472903-m03")
	I0916 23:57:54.383829  804231 system_pods.go:89] "kindnet-lh7dv" [1da43ca7-9af7-4573-9cdc-fd21b098ca2c] Running
	I0916 23:57:54.383837  804231 system_pods.go:89] "kindnet-q7c7s" [85db5b30-8ace-4bb0-8886-32b9ca032b2b] Running
	I0916 23:57:54.383842  804231 system_pods.go:89] "kindnet-wwdfr" [e86a6e30-712e-4d39-a235-87489d16c0f3] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-wwdfr": pod kindnet-wwdfr is already assigned to node "ha-472903-m03")
	I0916 23:57:54.383852  804231 system_pods.go:89] "kindnet-x6twd" [f2346479-1adb-4bc7-af07-971525be2b05] Pending: PodScheduled:SchedulerError (pod f2346479-1adb-4bc7-af07-971525be2b05(kube-system/kindnet-x6twd) is in the cache, so can't be assumed)
	I0916 23:57:54.383863  804231 system_pods.go:89] "kube-apiserver-ha-472903" [e2844751-3962-4753-8b63-79c124dd5fd7] Running
	I0916 23:57:54.383874  804231 system_pods.go:89] "kube-apiserver-ha-472903-m02" [6675419c-7693-4970-b73c-8415bcda1684] Running
	I0916 23:57:54.383881  804231 system_pods.go:89] "kube-apiserver-ha-472903-m03" [8c79e747-e193-4471-a4be-ab4d604998ad] Pending
	I0916 23:57:54.383887  804231 system_pods.go:89] "kube-controller-manager-ha-472903" [be5cfd0b-a3b9-44cf-8cde-74e9eb89c738] Running
	I0916 23:57:54.383895  804231 system_pods.go:89] "kube-controller-manager-ha-472903-m02" [54f6e7e0-0a78-4651-b24f-f902c6bf7efb] Running
	I0916 23:57:54.383900  804231 system_pods.go:89] "kube-controller-manager-ha-472903-m03" [d48b1c84-6653-43e8-9322-ae2c64471dde] Pending
	I0916 23:57:54.383908  804231 system_pods.go:89] "kube-proxy-58lkb" [32fed88c-ce9e-4536-8e96-04ab5b4f5d42] Running
	I0916 23:57:54.383913  804231 system_pods.go:89] "kube-proxy-d4m8f" [d4a70eec-48a7-4ea6-871a-1b5ed2beca9a] Running
	I0916 23:57:54.383921  804231 system_pods.go:89] "kube-proxy-kn6nb" [53644856-9fda-4556-bbb5-12254c4b00a3] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-kn6nb": pod kube-proxy-kn6nb is already assigned to node "ha-472903-m03")
	I0916 23:57:54.383930  804231 system_pods.go:89] "kube-proxy-xhlnz" [1967fed1-7529-46d0-accd-ab74751b47fa] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-xhlnz": pod kube-proxy-xhlnz is already assigned to node "ha-472903-m03")
	I0916 23:57:54.383939  804231 system_pods.go:89] "kube-scheduler-ha-472903" [e949de65-b218-45cb-abe7-79b704aae473] Running
	I0916 23:57:54.383946  804231 system_pods.go:89] "kube-scheduler-ha-472903-m02" [08b5a4f0-3aa6-4a82-b171-afc1eafcd4c2] Running
	I0916 23:57:54.383955  804231 system_pods.go:89] "kube-scheduler-ha-472903-m03" [7b954c46-3b8c-47e7-b10c-a20dd936d45c] Pending
	I0916 23:57:54.383962  804231 system_pods.go:89] "kube-vip-ha-472903" [ccdab212-cf0c-4bf0-958b-173e1008f7bc] Running
	I0916 23:57:54.383967  804231 system_pods.go:89] "kube-vip-ha-472903-m02" [748f096f-bec6-4de8-92f0-128db827bdd6] Running
	I0916 23:57:54.383975  804231 system_pods.go:89] "kube-vip-ha-472903-m03" [62b1f237-95c3-4a40-b3b7-a519f7c80ad4] Pending
	I0916 23:57:54.383980  804231 system_pods.go:89] "storage-provisioner" [ac7f283e-4d28-46cf-a519-bd227237d5e7] Running
	I0916 23:57:54.383991  804231 system_pods.go:126] duration metric: took 4.985254ms to wait for k8s-apps to be running ...
	I0916 23:57:54.384002  804231 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 23:57:54.384056  804231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 23:57:54.395540  804231 system_svc.go:56] duration metric: took 11.532177ms WaitForService to wait for kubelet
	I0916 23:57:54.395557  804231 kubeadm.go:578] duration metric: took 2.168909422s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 23:57:54.395577  804231 node_conditions.go:102] verifying NodePressure condition ...
	I0916 23:57:54.398165  804231 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 23:57:54.398183  804231 node_conditions.go:123] node cpu capacity is 8
	I0916 23:57:54.398194  804231 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 23:57:54.398197  804231 node_conditions.go:123] node cpu capacity is 8
	I0916 23:57:54.398201  804231 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 23:57:54.398205  804231 node_conditions.go:123] node cpu capacity is 8
	I0916 23:57:54.398209  804231 node_conditions.go:105] duration metric: took 2.627179ms to run NodePressure ...
	I0916 23:57:54.398219  804231 start.go:241] waiting for startup goroutines ...
	I0916 23:57:54.398248  804231 start.go:255] writing updated cluster config ...
	I0916 23:57:54.398554  804231 ssh_runner.go:195] Run: rm -f paused
	I0916 23:57:54.402187  804231 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0916 23:57:54.402686  804231 kapi.go:59] client config for ha-472903: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key", CAFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 23:57:54.405144  804231 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-c94hz" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:54.409401  804231 pod_ready.go:94] pod "coredns-66bc5c9577-c94hz" is "Ready"
	I0916 23:57:54.409438  804231 pod_ready.go:86] duration metric: took 4.271645ms for pod "coredns-66bc5c9577-c94hz" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:54.409448  804231 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-qn8m7" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:54.413536  804231 pod_ready.go:94] pod "coredns-66bc5c9577-qn8m7" is "Ready"
	I0916 23:57:54.413553  804231 pod_ready.go:86] duration metric: took 4.095453ms for pod "coredns-66bc5c9577-qn8m7" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:54.415699  804231 pod_ready.go:83] waiting for pod "etcd-ha-472903" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:54.419599  804231 pod_ready.go:94] pod "etcd-ha-472903" is "Ready"
	I0916 23:57:54.419618  804231 pod_ready.go:86] duration metric: took 3.899664ms for pod "etcd-ha-472903" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:54.419627  804231 pod_ready.go:83] waiting for pod "etcd-ha-472903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:54.423363  804231 pod_ready.go:94] pod "etcd-ha-472903-m02" is "Ready"
	I0916 23:57:54.423380  804231 pod_ready.go:86] duration metric: took 3.746731ms for pod "etcd-ha-472903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:54.423386  804231 pod_ready.go:83] waiting for pod "etcd-ha-472903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:54.603706  804231 request.go:683] "Waited before sending request" delay="180.227617ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-472903-m03"
	I0916 23:57:54.803902  804231 request.go:683] "Waited before sending request" delay="197.349252ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903-m03"
	I0916 23:57:55.003954  804231 request.go:683] "Waited before sending request" delay="80.206914ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-472903-m03"
	I0916 23:57:55.203362  804231 request.go:683] "Waited before sending request" delay="196.197515ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903-m03"
	I0916 23:57:55.206052  804231 pod_ready.go:94] pod "etcd-ha-472903-m03" is "Ready"
	I0916 23:57:55.206075  804231 pod_ready.go:86] duration metric: took 782.683771ms for pod "etcd-ha-472903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:55.403450  804231 request.go:683] "Waited before sending request" delay="197.254129ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I0916 23:57:55.406629  804231 pod_ready.go:83] waiting for pod "kube-apiserver-ha-472903" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:55.604081  804231 request.go:683] "Waited before sending request" delay="197.327981ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-472903"
	I0916 23:57:55.803277  804231 request.go:683] "Waited before sending request" delay="196.28238ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903"
	I0916 23:57:55.806023  804231 pod_ready.go:94] pod "kube-apiserver-ha-472903" is "Ready"
	I0916 23:57:55.806053  804231 pod_ready.go:86] duration metric: took 399.400731ms for pod "kube-apiserver-ha-472903" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:55.806064  804231 pod_ready.go:83] waiting for pod "kube-apiserver-ha-472903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:56.003360  804231 request.go:683] "Waited before sending request" delay="197.181089ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-472903-m02"
	I0916 23:57:56.203591  804231 request.go:683] "Waited before sending request" delay="197.334062ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903-m02"
	I0916 23:57:56.206593  804231 pod_ready.go:94] pod "kube-apiserver-ha-472903-m02" is "Ready"
	I0916 23:57:56.206619  804231 pod_ready.go:86] duration metric: took 400.548564ms for pod "kube-apiserver-ha-472903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:56.206627  804231 pod_ready.go:83] waiting for pod "kube-apiserver-ha-472903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:56.404053  804231 request.go:683] "Waited before sending request" delay="197.330591ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-472903-m03"
	I0916 23:57:56.603366  804231 request.go:683] "Waited before sending request" delay="196.334008ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903-m03"
	I0916 23:57:56.606216  804231 pod_ready.go:94] pod "kube-apiserver-ha-472903-m03" is "Ready"
	I0916 23:57:56.606240  804231 pod_ready.go:86] duration metric: took 399.60823ms for pod "kube-apiserver-ha-472903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:56.803696  804231 request.go:683] "Waited before sending request" delay="197.341894ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I0916 23:57:56.806878  804231 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-472903" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:57.003237  804231 request.go:683] "Waited before sending request" delay="196.261492ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-472903"
	I0916 23:57:57.203189  804231 request.go:683] "Waited before sending request" delay="197.16206ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903"
	I0916 23:57:57.205847  804231 pod_ready.go:94] pod "kube-controller-manager-ha-472903" is "Ready"
	I0916 23:57:57.205870  804231 pod_ready.go:86] duration metric: took 398.97003ms for pod "kube-controller-manager-ha-472903" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:57.205878  804231 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-472903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:57.403223  804231 request.go:683] "Waited before sending request" delay="197.233762ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-472903-m02"
	I0916 23:57:57.603503  804231 request.go:683] "Waited before sending request" delay="197.308924ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903-m02"
	I0916 23:57:57.606309  804231 pod_ready.go:94] pod "kube-controller-manager-ha-472903-m02" is "Ready"
	I0916 23:57:57.606331  804231 pod_ready.go:86] duration metric: took 400.447455ms for pod "kube-controller-manager-ha-472903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:57.606339  804231 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-472903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:57.803572  804231 request.go:683] "Waited before sending request" delay="197.156861ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-472903-m03"
	I0916 23:57:58.003564  804231 request.go:683] "Waited before sending request" delay="197.308739ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903-m03"
	I0916 23:57:58.006495  804231 pod_ready.go:94] pod "kube-controller-manager-ha-472903-m03" is "Ready"
	I0916 23:57:58.006527  804231 pod_ready.go:86] duration metric: took 400.177209ms for pod "kube-controller-manager-ha-472903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:58.203971  804231 request.go:683] "Waited before sending request" delay="197.330656ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I0916 23:57:58.207087  804231 pod_ready.go:83] waiting for pod "kube-proxy-58lkb" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:58.403484  804231 request.go:683] "Waited before sending request" delay="196.298118ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-58lkb"
	I0916 23:57:58.603727  804231 request.go:683] "Waited before sending request" delay="197.238459ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903-m02"
	I0916 23:57:58.606561  804231 pod_ready.go:94] pod "kube-proxy-58lkb" is "Ready"
	I0916 23:57:58.606586  804231 pod_ready.go:86] duration metric: took 399.476011ms for pod "kube-proxy-58lkb" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:58.606593  804231 pod_ready.go:83] waiting for pod "kube-proxy-d4m8f" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:58.804003  804231 request.go:683] "Waited before sending request" delay="197.323847ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d4m8f"
	I0916 23:57:59.003937  804231 request.go:683] "Waited before sending request" delay="197.340178ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903"
	I0916 23:57:59.006899  804231 pod_ready.go:94] pod "kube-proxy-d4m8f" is "Ready"
	I0916 23:57:59.006927  804231 pod_ready.go:86] duration metric: took 400.327971ms for pod "kube-proxy-d4m8f" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:59.006938  804231 pod_ready.go:83] waiting for pod "kube-proxy-kn6nb" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:59.203366  804231 request.go:683] "Waited before sending request" delay="196.341882ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kn6nb"
	I0916 23:57:59.403608  804231 request.go:683] "Waited before sending request" delay="197.193431ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903-m03"
	I0916 23:57:59.604047  804231 request.go:683] "Waited before sending request" delay="96.244025ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kn6nb"
	I0916 23:57:59.803112  804231 request.go:683] "Waited before sending request" delay="196.282766ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903-m03"
	I0916 23:58:00.203120  804231 request.go:683] "Waited before sending request" delay="192.276334ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903-m03"
	I0916 23:58:00.603459  804231 request.go:683] "Waited before sending request" delay="93.218157ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903-m03"
	W0916 23:58:01.014543  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:03.512871  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:06.012965  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:08.512763  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:11.012966  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:13.013166  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:15.512655  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:18.012615  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:20.513188  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:23.012908  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:25.013240  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:27.512733  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:30.012142  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:32.012503  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:34.013070  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:36.512643  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	I0916 23:58:37.014670  804231 pod_ready.go:94] pod "kube-proxy-kn6nb" is "Ready"
	I0916 23:58:37.014697  804231 pod_ready.go:86] duration metric: took 38.007753603s for pod "kube-proxy-kn6nb" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:37.017732  804231 pod_ready.go:83] waiting for pod "kube-scheduler-ha-472903" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:37.022228  804231 pod_ready.go:94] pod "kube-scheduler-ha-472903" is "Ready"
	I0916 23:58:37.022246  804231 pod_ready.go:86] duration metric: took 4.488553ms for pod "kube-scheduler-ha-472903" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:37.022253  804231 pod_ready.go:83] waiting for pod "kube-scheduler-ha-472903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:37.026173  804231 pod_ready.go:94] pod "kube-scheduler-ha-472903-m02" is "Ready"
	I0916 23:58:37.026191  804231 pod_ready.go:86] duration metric: took 3.932068ms for pod "kube-scheduler-ha-472903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:37.026198  804231 pod_ready.go:83] waiting for pod "kube-scheduler-ha-472903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:37.030029  804231 pod_ready.go:94] pod "kube-scheduler-ha-472903-m03" is "Ready"
	I0916 23:58:37.030046  804231 pod_ready.go:86] duration metric: took 3.843487ms for pod "kube-scheduler-ha-472903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:37.030054  804231 pod_ready.go:40] duration metric: took 42.627839542s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0916 23:58:37.073472  804231 start.go:617] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0916 23:58:37.074923  804231 out.go:179] * Done! kubectl is now configured to use "ha-472903" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0a41d8b587e02       8c811b4aec35f       12 minutes ago      Running             busybox                   0                   a2422ee3e6e6d       busybox-7b57f96db7-6hrm6
	f33de265effb1       6e38f40d628db       14 minutes ago      Running             storage-provisioner       1                   1c0713f862ea0       storage-provisioner
	9f103b05d2d6f       52546a367cc9e       14 minutes ago      Running             coredns                   0                   9579263342827       coredns-66bc5c9577-c94hz
	3b457407f10e3       52546a367cc9e       14 minutes ago      Running             coredns                   0                   290cfb537788e       coredns-66bc5c9577-qn8m7
	cc69d2451cb65       409467f978b4a       14 minutes ago      Running             kindnet-cni               0                   3e17d6ae9b2a6       kindnet-lh7dv
	f4767b6363ce9       6e38f40d628db       14 minutes ago      Exited              storage-provisioner       0                   1c0713f862ea0       storage-provisioner
	92dd4d116eb03       df0860106674d       14 minutes ago      Running             kube-proxy                0                   8c0ecd5301326       kube-proxy-d4m8f
	3cb75495f7a54       765655ea60781       14 minutes ago      Running             kube-vip                  0                   4c425da29992d       kube-vip-ha-472903
	bba28cace6502       46169d968e920       14 minutes ago      Running             kube-scheduler            0                   f18dd7697c60f       kube-scheduler-ha-472903
	087290a41f59c       a0af72f2ec6d6       14 minutes ago      Running             kube-controller-manager   0                   0760ebe1d2a56       kube-controller-manager-ha-472903
	0aba62132d764       90550c43ad2bc       14 minutes ago      Running             kube-apiserver            0                   8ad1fa8bc0267       kube-apiserver-ha-472903
	23c0af0bdbe95       5f1f5298c888d       14 minutes ago      Running             etcd                      0                   b01a62742caec       etcd-ha-472903
	
	
	==> containerd <==
	Sep 16 23:57:20 ha-472903 containerd[765]: time="2025-09-16T23:57:20.857383931Z" level=info msg="StartContainer for \"9f103b05d2d6fd9df1ffca0135173363251e58587aa3f9093200d96a7302d315\""
	Sep 16 23:57:20 ha-472903 containerd[765]: time="2025-09-16T23:57:20.915209442Z" level=info msg="StartContainer for \"9f103b05d2d6fd9df1ffca0135173363251e58587aa3f9093200d96a7302d315\" returns successfully"
	Sep 16 23:57:26 ha-472903 containerd[765]: time="2025-09-16T23:57:26.847849669Z" level=info msg="received exit event container_id:\"f4767b6363ce9c18f8b183cfefd42f69a8b6845fea9e30eec23d90668bc0a3f8\"  id:\"f4767b6363ce9c18f8b183cfefd42f69a8b6845fea9e30eec23d90668bc0a3f8\"  pid:2188  exit_status:1  exited_at:{seconds:1758067046  nanos:847300745}"
	Sep 16 23:57:29 ha-472903 containerd[765]: time="2025-09-16T23:57:29.084468964Z" level=info msg="shim disconnected" id=f4767b6363ce9c18f8b183cfefd42f69a8b6845fea9e30eec23d90668bc0a3f8 namespace=k8s.io
	Sep 16 23:57:29 ha-472903 containerd[765]: time="2025-09-16T23:57:29.084514637Z" level=warning msg="cleaning up after shim disconnected" id=f4767b6363ce9c18f8b183cfefd42f69a8b6845fea9e30eec23d90668bc0a3f8 namespace=k8s.io
	Sep 16 23:57:29 ha-472903 containerd[765]: time="2025-09-16T23:57:29.084528446Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 16 23:57:29 ha-472903 containerd[765]: time="2025-09-16T23:57:29.861023305Z" level=info msg="CreateContainer within sandbox \"1c0713f862ea047ef39e7ae39aea7b7769255565bbf61da2859ac341b5b32bca\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:1,}"
	Sep 16 23:57:29 ha-472903 containerd[765]: time="2025-09-16T23:57:29.875038922Z" level=info msg="CreateContainer within sandbox \"1c0713f862ea047ef39e7ae39aea7b7769255565bbf61da2859ac341b5b32bca\" for &ContainerMetadata{Name:storage-provisioner,Attempt:1,} returns container id \"f33de265effb1050318db82caef7df35706c6a78a2f601466a28e71f4048fedc\""
	Sep 16 23:57:29 ha-472903 containerd[765]: time="2025-09-16T23:57:29.875884762Z" level=info msg="StartContainer for \"f33de265effb1050318db82caef7df35706c6a78a2f601466a28e71f4048fedc\""
	Sep 16 23:57:29 ha-472903 containerd[765]: time="2025-09-16T23:57:29.929708067Z" level=info msg="StartContainer for \"f33de265effb1050318db82caef7df35706c6a78a2f601466a28e71f4048fedc\" returns successfully"
	Sep 16 23:58:40 ha-472903 containerd[765]: time="2025-09-16T23:58:40.362974621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox-7b57f96db7-6hrm6,Uid:bd03bad4-af1e-42d0-81fb-6fcaeaa8775e,Namespace:default,Attempt:0,}"
	Sep 16 23:58:40 ha-472903 containerd[765]: time="2025-09-16T23:58:40.455106923Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to create inotify fd"
	Sep 16 23:58:40 ha-472903 containerd[765]: time="2025-09-16T23:58:40.455480779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox-7b57f96db7-6hrm6,Uid:bd03bad4-af1e-42d0-81fb-6fcaeaa8775e,Namespace:default,Attempt:0,} returns sandbox id \"a2422ee3e6e6de2b23238fbdd05d962f2c25009569227e21869b285e5353e70a\""
	Sep 16 23:58:40 ha-472903 containerd[765]: time="2025-09-16T23:58:40.457290181Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28\""
	Sep 16 23:58:42 ha-472903 containerd[765]: time="2025-09-16T23:58:42.440332779Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Sep 16 23:58:42 ha-472903 containerd[765]: time="2025-09-16T23:58:42.440968214Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28: active requests=0, bytes read=727667"
	Sep 16 23:58:42 ha-472903 containerd[765]: time="2025-09-16T23:58:42.442025332Z" level=info msg="ImageCreate event name:\"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Sep 16 23:58:42 ha-472903 containerd[765]: time="2025-09-16T23:58:42.443719507Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Sep 16 23:58:42 ha-472903 containerd[765]: time="2025-09-16T23:58:42.444221405Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28\" with image id \"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\", repo tag \"gcr.io/k8s-minikube/busybox:1.28\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12\", size \"725911\" in 1.986887608s"
	Sep 16 23:58:42 ha-472903 containerd[765]: time="2025-09-16T23:58:42.444254598Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28\" returns image reference \"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\""
	Sep 16 23:58:42 ha-472903 containerd[765]: time="2025-09-16T23:58:42.447875079Z" level=info msg="CreateContainer within sandbox \"a2422ee3e6e6de2b23238fbdd05d962f2c25009569227e21869b285e5353e70a\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Sep 16 23:58:42 ha-472903 containerd[765]: time="2025-09-16T23:58:42.457018566Z" level=info msg="CreateContainer within sandbox \"a2422ee3e6e6de2b23238fbdd05d962f2c25009569227e21869b285e5353e70a\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"0a41d8b587e021b881cec481dd5e13b98d695cdba1671a8eb501413b121637d8\""
	Sep 16 23:58:42 ha-472903 containerd[765]: time="2025-09-16T23:58:42.457508138Z" level=info msg="StartContainer for \"0a41d8b587e021b881cec481dd5e13b98d695cdba1671a8eb501413b121637d8\""
	Sep 16 23:58:42 ha-472903 containerd[765]: time="2025-09-16T23:58:42.510633374Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to create inotify fd"
	Sep 16 23:58:42 ha-472903 containerd[765]: time="2025-09-16T23:58:42.512731136Z" level=info msg="StartContainer for \"0a41d8b587e021b881cec481dd5e13b98d695cdba1671a8eb501413b121637d8\" returns successfully"
	
	
	==> coredns [3b457407f10e357ce33da7fa3fb4333f8312f0d3e3570cf8528cdcac8f5a1d0f] <==
	[INFO] 10.244.1.2:57899 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.012540337s
	[INFO] 10.244.1.2:54323 - 5 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,rd,ra 124 0.008980197s
	[INFO] 10.244.1.2:53799 - 6 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,rd,ra 126 0.009949044s
	[INFO] 10.244.0.4:39485 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157098s
	[INFO] 10.244.0.4:57871 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 89 0.000750185s
	[INFO] 10.244.0.4:53410 - 5 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,aa,rd,ra 126 0.000089028s
	[INFO] 10.244.1.2:37733 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150317s
	[INFO] 10.244.1.2:59346 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.028128363s
	[INFO] 10.244.1.2:43091 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.01004668s
	[INFO] 10.244.1.2:37227 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000191819s
	[INFO] 10.244.1.2:40079 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000125376s
	[INFO] 10.244.0.4:38168 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000181114s
	[INFO] 10.244.0.4:60067 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000087147s
	[INFO] 10.244.0.4:47611 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000122939s
	[INFO] 10.244.0.4:37626 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000121195s
	[INFO] 10.244.1.2:42817 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159509s
	[INFO] 10.244.1.2:33910 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000186538s
	[INFO] 10.244.1.2:37929 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000109836s
	[INFO] 10.244.0.4:50698 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000212263s
	[INFO] 10.244.0.4:33166 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000100167s
	[INFO] 10.244.1.2:50377 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157558s
	[INFO] 10.244.1.2:39491 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000132025s
	[INFO] 10.244.1.2:50075 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000112028s
	[INFO] 10.244.0.4:58743 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000149175s
	[INFO] 10.244.0.4:52796 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000114946s
	
	
	==> coredns [9f103b05d2d6fd9df1ffca0135173363251e58587aa3f9093200d96a7302d315] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45239 - 14115 "HINFO IN 5883645869461503498.3950535614037284853. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.058516241s
	[INFO] 10.244.1.2:55352 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 89 0.003252862s
	[INFO] 10.244.0.4:33650 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.001640931s
	[INFO] 10.244.0.4:50077 - 6 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,rd,ra 124 0.000621363s
	[INFO] 10.244.1.2:48439 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000189187s
	[INFO] 10.244.1.2:39582 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000151327s
	[INFO] 10.244.1.2:59539 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000140715s
	[INFO] 10.244.0.4:42999 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000177514s
	[INFO] 10.244.0.4:36769 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.010694753s
	[INFO] 10.244.0.4:53074 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000158932s
	[INFO] 10.244.0.4:57223 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00012213s
	[INFO] 10.244.1.2:50810 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000176678s
	[INFO] 10.244.0.4:58045 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142445s
	[INFO] 10.244.0.4:39777 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000123555s
	[INFO] 10.244.1.2:59022 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000148853s
	[INFO] 10.244.0.4:45136 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001657s
	[INFO] 10.244.0.4:37711 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000134332s
	
	
	==> describe nodes <==
	Name:               ha-472903
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-472903
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=ha-472903
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_16T23_56_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Sep 2025 23:56:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-472903
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:11:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:08:42 +0000   Tue, 16 Sep 2025 23:56:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:08:42 +0000   Tue, 16 Sep 2025 23:56:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:08:42 +0000   Tue, 16 Sep 2025 23:56:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:08:42 +0000   Tue, 16 Sep 2025 23:56:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-472903
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 ac22e2ab5b0349cdb9474983aa23278e
	  System UUID:                695af4c7-28fb-4299-9454-75db3262ca2c
	  Boot ID:                    4acfd7d3-9698-436f-b4ae-efdf6bd483d5
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-6hrm6             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-66bc5c9577-c94hz             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     14m
	  kube-system                 coredns-66bc5c9577-qn8m7             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     14m
	  kube-system                 etcd-ha-472903                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         14m
	  kube-system                 kindnet-lh7dv                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      14m
	  kube-system                 kube-apiserver-ha-472903             250m (3%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-472903    200m (2%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-d4m8f                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-472903             100m (1%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-472903                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node ha-472903 status is now: NodeHasSufficientPID
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node ha-472903 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node ha-472903 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m                kubelet          Node ha-472903 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                kubelet          Node ha-472903 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m                kubelet          Node ha-472903 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m                node-controller  Node ha-472903 event: Registered Node ha-472903 in Controller
	  Normal  RegisteredNode           14m                node-controller  Node ha-472903 event: Registered Node ha-472903 in Controller
	  Normal  RegisteredNode           13m                node-controller  Node ha-472903 event: Registered Node ha-472903 in Controller
	
	
	Name:               ha-472903-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-472903-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=ha-472903
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_16T23_57_23_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Sep 2025 23:57:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-472903-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:11:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:07:45 +0000   Tue, 16 Sep 2025 23:57:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:07:45 +0000   Tue, 16 Sep 2025 23:57:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:07:45 +0000   Tue, 16 Sep 2025 23:57:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:07:45 +0000   Tue, 16 Sep 2025 23:57:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-472903-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 4094672df3d84509ae4c88c54f7f5e93
	  System UUID:                85df9db8-f21a-4038-9f8c-4cc1d81dc0d5
	  Boot ID:                    4acfd7d3-9698-436f-b4ae-efdf6bd483d5
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-4jfjt                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-ha-472903-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         14m
	  kube-system                 kindnet-q7c7s                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      14m
	  kube-system                 kube-apiserver-ha-472903-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-472903-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-58lkb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-472903-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-472903-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  Starting        14m   kube-proxy       
	  Normal  RegisteredNode  14m   node-controller  Node ha-472903-m02 event: Registered Node ha-472903-m02 in Controller
	  Normal  RegisteredNode  14m   node-controller  Node ha-472903-m02 event: Registered Node ha-472903-m02 in Controller
	  Normal  RegisteredNode  13m   node-controller  Node ha-472903-m02 event: Registered Node ha-472903-m02 in Controller
	
	
	Name:               ha-472903-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-472903-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=ha-472903
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_16T23_57_52_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Sep 2025 23:57:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-472903-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:11:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:09:45 +0000   Tue, 16 Sep 2025 23:57:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:09:45 +0000   Tue, 16 Sep 2025 23:57:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:09:45 +0000   Tue, 16 Sep 2025 23:57:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:09:45 +0000   Tue, 16 Sep 2025 23:57:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-472903-m03
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 9964c713c65f4333be8a877aab744040
	  System UUID:                7eb7f2ee-a32d-4876-a4ad-58f745b9c377
	  Boot ID:                    4acfd7d3-9698-436f-b4ae-efdf6bd483d5
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-mknzs                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-ha-472903-m03                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         13m
	  kube-system                 kindnet-x6twd                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-472903-m03             250m (3%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-472903-m03    200m (2%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-kn6nb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-472903-m03             100m (1%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-472903-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  13m   node-controller  Node ha-472903-m03 event: Registered Node ha-472903-m03 in Controller
	  Normal  RegisteredNode  13m   node-controller  Node ha-472903-m03 event: Registered Node ha-472903-m03 in Controller
	  Normal  RegisteredNode  13m   node-controller  Node ha-472903-m03 event: Registered Node ha-472903-m03 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 86 e8 75 4b 01 57 08 06
	[  +0.025562] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff be 3d a9 85 b1 bd 08 06
	[ +13.150028] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff d6 5c f0 26 cd ba 08 06
	[  +0.000341] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a 20 90 fb f5 d8 08 06
	[ +28.639349] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 06 26 63 8d db 90 08 06
	[  +0.000417] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff be 3d a9 85 b1 bd 08 06
	[  +0.836892] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 56 cc 9b 52 38 94 08 06
	[  +0.080327] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3e 79 8e c8 7e 37 08 06
	[Sep16 23:40] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 66 3b 76 df aa 6a 08 06
	[ +20.325550] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff de 39 4b 41 df 63 08 06
	[  +0.000318] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 66 3b 76 df aa 6a 08 06
	[  +8.925776] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3e cd c1 f7 dc c8 08 06
	[  +0.000373] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 3e 79 8e c8 7e 37 08 06
	
	
	==> etcd [23c0af0bdbe9526d53769461ed9f80d8c743b02e625b65cce39c888f5e7d4b4e] <==
	{"level":"info","ts":"2025-09-16T23:57:38.321619Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"ab9d0391dce79465","stream-type":"stream Message"}
	{"level":"warn","ts":"2025-09-16T23:57:38.321647Z","caller":"rafthttp/stream.go:264","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"ab9d0391dce79465"}
	{"level":"info","ts":"2025-09-16T23:57:38.321659Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"ab9d0391dce79465"}
	{"level":"info","ts":"2025-09-16T23:57:38.321995Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"ab9d0391dce79465"}
	{"level":"info","ts":"2025-09-16T23:57:38.324746Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"ab9d0391dce79465","stream-type":"stream MsgApp v2"}
	{"level":"warn","ts":"2025-09-16T23:57:38.324782Z","caller":"rafthttp/stream.go:264","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"ab9d0391dce79465"}
	{"level":"info","ts":"2025-09-16T23:57:38.324796Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"ab9d0391dce79465"}
	{"level":"warn","ts":"2025-09-16T23:57:38.539376Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:45372","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-16T23:57:38.542781Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aec36adc501070cc switched to configuration voters=(4226730353838347643 12366044076840555621 12593026477526642892)"}
	{"level":"info","ts":"2025-09-16T23:57:38.542928Z","caller":"membership/cluster.go:550","msg":"promote member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","promoted-member-id":"ab9d0391dce79465"}
	{"level":"info","ts":"2025-09-16T23:57:38.542988Z","caller":"etcdserver/server.go:1752","msg":"applied a configuration change through raft","local-member-id":"aec36adc501070cc","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"ab9d0391dce79465"}
	{"level":"info","ts":"2025-09-16T23:57:40.311787Z","caller":"etcdserver/server.go:1856","msg":"sent merged snapshot","from":"aec36adc501070cc","to":"3aa85cdcd5e5557b","bytes":876533,"size":"876 kB","took":"30.009467109s"}
	{"level":"info","ts":"2025-09-16T23:57:47.400606Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-16T23:57:51.874557Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-16T23:58:06.103123Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-16T23:58:08.299219Z","caller":"etcdserver/server.go:1856","msg":"sent merged snapshot","from":"aec36adc501070cc","to":"ab9d0391dce79465","bytes":1356737,"size":"1.4 MB","took":"30.011071692s"}
	{"level":"info","ts":"2025-09-17T00:06:46.502551Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1554}
	{"level":"info","ts":"2025-09-17T00:06:46.523688Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1554,"took":"20.616779ms","hash":4277915431,"current-db-size-bytes":3936256,"current-db-size":"3.9 MB","current-db-size-in-use-bytes":2105344,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2025-09-17T00:06:46.523839Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":4277915431,"revision":1554,"compact-revision":-1}
	{"level":"info","ts":"2025-09-17T00:10:51.037991Z","caller":"traceutil/trace.go:172","msg":"trace[1596502853] transaction","detail":"{read_only:false; response_revision:2892; number_of_response:1; }","duration":"106.292545ms","start":"2025-09-17T00:10:50.931676Z","end":"2025-09-17T00:10:51.037969Z","steps":["trace[1596502853] 'process raft request'  (duration: 106.163029ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-17T00:10:52.331973Z","caller":"traceutil/trace.go:172","msg":"trace[583569919] transaction","detail":"{read_only:false; response_revision:2894; number_of_response:1; }","duration":"112.232554ms","start":"2025-09-17T00:10:52.219723Z","end":"2025-09-17T00:10:52.331956Z","steps":["trace[583569919] 'process raft request'  (duration: 112.100203ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-17T00:11:09.266390Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"165.274935ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:602"}
	{"level":"info","ts":"2025-09-17T00:11:09.266493Z","caller":"traceutil/trace.go:172","msg":"trace[316861325] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:2934; }","duration":"165.393135ms","start":"2025-09-17T00:11:09.101086Z","end":"2025-09-17T00:11:09.266479Z","steps":["trace[316861325] 'range keys from in-memory index tree'  (duration: 164.766592ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-17T00:11:09.393171Z","caller":"traceutil/trace.go:172","msg":"trace[484529161] transaction","detail":"{read_only:false; response_revision:2935; number_of_response:1; }","duration":"123.717206ms","start":"2025-09-17T00:11:09.269439Z","end":"2025-09-17T00:11:09.393156Z","steps":["trace[484529161] 'process raft request'  (duration: 123.599826ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-17T00:11:09.634612Z","caller":"traceutil/trace.go:172","msg":"trace[1840342263] transaction","detail":"{read_only:false; response_revision:2936; number_of_response:1; }","duration":"177.817508ms","start":"2025-09-17T00:11:09.456780Z","end":"2025-09-17T00:11:09.634597Z","steps":["trace[1840342263] 'process raft request'  (duration: 177.726281ms)"],"step_count":1}
	
	
	==> kernel <==
	 00:11:33 up  2:53,  0 users,  load average: 0.72, 0.49, 0.83
	Linux ha-472903 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [cc69d2451cb65860b5bc78e027be2fc1cb0f9fa6542b4abe3bc1ff1c90a8fe60] <==
	I0917 00:10:47.509093       1 main.go:324] Node ha-472903-m03 has CIDR [10.244.2.0/24] 
	I0917 00:10:57.504295       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:10:57.504328       1 main.go:324] Node ha-472903-m03 has CIDR [10.244.2.0/24] 
	I0917 00:10:57.504535       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:10:57.504555       1 main.go:301] handling current node
	I0917 00:10:57.504571       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:10:57.504577       1 main.go:324] Node ha-472903-m02 has CIDR [10.244.1.0/24] 
	I0917 00:11:07.510900       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:11:07.510941       1 main.go:301] handling current node
	I0917 00:11:07.510955       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:11:07.510960       1 main.go:324] Node ha-472903-m02 has CIDR [10.244.1.0/24] 
	I0917 00:11:07.512207       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:11:07.512233       1 main.go:324] Node ha-472903-m03 has CIDR [10.244.2.0/24] 
	I0917 00:11:17.511478       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:11:17.511518       1 main.go:301] handling current node
	I0917 00:11:17.511535       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:11:17.511540       1 main.go:324] Node ha-472903-m02 has CIDR [10.244.1.0/24] 
	I0917 00:11:17.511701       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:11:17.511709       1 main.go:324] Node ha-472903-m03 has CIDR [10.244.2.0/24] 
	I0917 00:11:27.503508       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:11:27.503550       1 main.go:301] handling current node
	I0917 00:11:27.503570       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:11:27.503577       1 main.go:324] Node ha-472903-m02 has CIDR [10.244.1.0/24] 
	I0917 00:11:27.503775       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:11:27.503786       1 main.go:324] Node ha-472903-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [0aba62132d764965d8e1a80a4a6345bb7e34892b23143da4a7af3450cd465d6c] <==
	I0917 00:06:06.800617       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:06:32.710262       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:06:47.441344       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0917 00:07:34.732036       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:07:42.022448       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:08:46.236959       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:08:51.159386       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:09:52.603432       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:09:53.014406       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0917 00:10:41.954540       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37534: use of closed network connection
	E0917 00:10:42.122977       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37556: use of closed network connection
	E0917 00:10:42.250606       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37572: use of closed network connection
	E0917 00:10:42.442469       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37584: use of closed network connection
	E0917 00:10:42.605380       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37602: use of closed network connection
	E0917 00:10:42.730284       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37612: use of closed network connection
	E0917 00:10:42.884291       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37626: use of closed network connection
	E0917 00:10:43.036952       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37644: use of closed network connection
	E0917 00:10:43.161098       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37658: use of closed network connection
	E0917 00:10:45.408563       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37722: use of closed network connection
	E0917 00:10:45.568465       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37752: use of closed network connection
	E0917 00:10:45.727267       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37770: use of closed network connection
	E0917 00:10:45.883182       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37790: use of closed network connection
	E0917 00:10:46.004301       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37814: use of closed network connection
	I0917 00:10:57.282648       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:10:57.462257       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [087290a41f59caa4f9bc89759bcec6cf90f47c8a2ab83b7c671a8fff35641df9] <==
	I0916 23:56:54.728442       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0916 23:56:54.728466       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0916 23:56:54.728485       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0916 23:56:54.728644       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0916 23:56:54.728665       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0916 23:56:54.728648       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0916 23:56:54.728914       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0916 23:56:54.730175       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0916 23:56:54.730201       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0916 23:56:54.732432       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0916 23:56:54.733452       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0916 23:56:54.735655       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0916 23:56:54.735714       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0916 23:56:54.735760       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0916 23:56:54.735767       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0916 23:56:54.735772       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0916 23:56:54.740680       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-472903" podCIDRs=["10.244.0.0/24"]
	I0916 23:56:54.749950       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0916 23:57:22.933124       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-472903-m02\" does not exist"
	I0916 23:57:22.943785       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-472903-m02" podCIDRs=["10.244.1.0/24"]
	I0916 23:57:24.681339       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-472903-m02"
	I0916 23:57:51.749676       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-472903-m03\" does not exist"
	I0916 23:57:51.772476       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-472903-m03" podCIDRs=["10.244.2.0/24"]
	E0916 23:57:51.829801       1 daemon_controller.go:346] "Unhandled Error" err="kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kube-proxy\", GenerateName:\"\", Namespace:\"kube-system\", SelfLink:\"\", UID:\"3f5da9fc-6769-4ca8-a715-edeace44c646\", ResourceVersion:\"594\", Generation:1, CreationTimestamp:time.Date(2025, time.September, 16, 23, 56, 49, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"k8s-app\":\"kube-proxy\"}, Annotations:map[string]string{\"deprecated.daemonset.template.generation\":\"1\"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc00222d0e0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:\"\", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"
\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"k8s-app\":\"kube-proxy\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:\"kube-proxy\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSourc
e)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc0021ed7c0), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"xtables-lock\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc002fcdce0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolu
meSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIV
olumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"lib-modules\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc002fcdcf8), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtu
alDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:\"kube-proxy\", Image:\"registry.k8s.io/kube-proxy:v1.34.0\", Command:[]string{\"/usr/local/bin/kube-proxy\", \"--config=/var/lib/kube-proxy/config.conf\", \"--hostname-override=$(NODE_NAME)\"}, Args:[]string(nil), WorkingDir:\"\", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:\"NODE_NAME\", Value:\"\", ValueFrom:(*v1.EnvVarSource)(0xc00144a7e0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.Re
sourceList(nil), Claims:[]v1.ResourceClaim(nil)}, ResizePolicy:[]v1.ContainerResizePolicy(nil), RestartPolicy:(*v1.ContainerRestartPolicy)(nil), RestartPolicyRules:[]v1.ContainerRestartRule(nil), VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:\"kube-proxy\", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/var/lib/kube-proxy\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"xtables-lock\", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/run/xtables.lock\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"lib-modules\", ReadOnly:true, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/lib/modules\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Life
cycle:(*v1.Lifecycle)(nil), TerminationMessagePath:\"/dev/termination-log\", TerminationMessagePolicy:\"File\", ImagePullPolicy:\"IfNotPresent\", SecurityContext:(*v1.SecurityContext)(0xc0019549c0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:\"Always\", TerminationGracePeriodSeconds:(*int64)(0xc001900b18), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:\"ClusterFirst\", NodeSelector:map[string]string{\"kubernetes.io/os\":\"linux\"}, ServiceAccountName:\"kube-proxy\", DeprecatedServiceAccount:\"kube-proxy\", AutomountServiceAccountToken:(*bool)(nil), NodeName:\"\", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001ba1200), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:\"\", Subdomain:\"\", Affinity:(*v1.Affinity)(nil), SchedulerName:\"default-scheduler\", Tolerations:[]v1.Toleration{v1.Toleration{Key:\"\", Operator:\"Exists\", Value:\"\", Effect:\"\", Tole
rationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:\"system-node-critical\", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil), SchedulingGates:[]v1.PodSchedulingGate(nil), ResourceClaims:[]v1.PodResourceClaim(nil), Resources:(*v1.ResourceRequirements)(nil), HostnameOverride:(*string)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:\"RollingUpdate\", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc001e14570)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001900b70)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:2, NumberMisscheduled:0, DesiredNumberScheduled:2, NumberReady:2, ObservedGeneration:1, UpdatedNumberScheduled:2, NumberAvailab
le:2, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps \"kube-proxy\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0916 23:57:54.685322       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-472903-m03"
	
	
	==> kube-proxy [92dd4d116eb0387dded82fb32d35690ec2d00e3f5e7ac81bf7aea0c6814edd5e] <==
	I0916 23:56:56.831012       1 server_linux.go:53] "Using iptables proxy"
	I0916 23:56:56.891635       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0916 23:56:56.991820       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0916 23:56:56.991862       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0916 23:56:56.991952       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 23:56:57.015955       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 23:56:57.016001       1 server_linux.go:132] "Using iptables Proxier"
	I0916 23:56:57.021120       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 23:56:57.021457       1 server.go:527] "Version info" version="v1.34.0"
	I0916 23:56:57.021499       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 23:56:57.024872       1 config.go:200] "Starting service config controller"
	I0916 23:56:57.024892       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0916 23:56:57.024900       1 config.go:106] "Starting endpoint slice config controller"
	I0916 23:56:57.024909       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0916 23:56:57.024890       1 config.go:403] "Starting serviceCIDR config controller"
	I0916 23:56:57.024917       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0916 23:56:57.024937       1 config.go:309] "Starting node config controller"
	I0916 23:56:57.024942       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0916 23:56:57.125608       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0916 23:56:57.125691       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0916 23:56:57.125856       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0916 23:56:57.125902       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [bba28cace6502de93aa43db4fb51671581c5074990dea721d98d36d839734a67] <==
	E0916 23:56:48.619869       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0916 23:56:48.649766       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0916 23:56:48.673092       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	I0916 23:56:49.170967       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0916 23:57:51.780040       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-x6twd\": pod kindnet-x6twd is already assigned to node \"ha-472903-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-x6twd" node="ha-472903-m03"
	E0916 23:57:51.780142       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod f2346479-1adb-4bc7-af07-971525be2b05(kube-system/kindnet-x6twd) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-x6twd"
	E0916 23:57:51.780183       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-x6twd\": pod kindnet-x6twd is already assigned to node \"ha-472903-m03\"" logger="UnhandledError" pod="kube-system/kindnet-x6twd"
	I0916 23:57:51.782132       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-x6twd" node="ha-472903-m03"
	E0916 23:58:37.948695       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-x6xc9\": pod busybox-7b57f96db7-x6xc9 is already assigned to node \"ha-472903-m02\"" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-x6xc9" node="ha-472903-m02"
	E0916 23:58:37.948846       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 565a634f-ab41-4776-ba5d-63a601bfec48(default/busybox-7b57f96db7-x6xc9) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="default/busybox-7b57f96db7-x6xc9"
	E0916 23:58:37.948875       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-x6xc9\": pod busybox-7b57f96db7-x6xc9 is already assigned to node \"ha-472903-m02\"" logger="UnhandledError" pod="default/busybox-7b57f96db7-x6xc9"
	I0916 23:58:37.950251       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7b57f96db7-x6xc9" node="ha-472903-m02"
	I0916 23:58:37.966099       1 cache.go:512] "Pod was added to a different node than it was assumed" podKey="47b06c15-c007-4c50-a248-5411a0f4b6a7" pod="default/busybox-7b57f96db7-4jfjt" assumedNode="ha-472903-m02" currentNode="ha-472903"
	E0916 23:58:37.968241       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-4jfjt\": pod busybox-7b57f96db7-4jfjt is already assigned to node \"ha-472903-m02\"" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-4jfjt" node="ha-472903"
	E0916 23:58:37.968351       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 47b06c15-c007-4c50-a248-5411a0f4b6a7(default/busybox-7b57f96db7-4jfjt) was assumed on ha-472903 but assigned to ha-472903-m02" logger="UnhandledError" pod="default/busybox-7b57f96db7-4jfjt"
	E0916 23:58:37.968376       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-4jfjt\": pod busybox-7b57f96db7-4jfjt is already assigned to node \"ha-472903-m02\"" logger="UnhandledError" pod="default/busybox-7b57f96db7-4jfjt"
	I0916 23:58:37.969472       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7b57f96db7-4jfjt" node="ha-472903-m02"
	E0916 23:58:38.002469       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-wp95z\": pod busybox-7b57f96db7-wp95z is being deleted, cannot be assigned to a host" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-wp95z" node="ha-472903"
	E0916 23:58:38.002779       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-wp95z\": pod busybox-7b57f96db7-wp95z is being deleted, cannot be assigned to a host" logger="UnhandledError" pod="default/busybox-7b57f96db7-wp95z"
	E0916 23:58:38.046394       1 pod_status_patch.go:111] "Failed to patch pod status" err="pods \"busybox-7b57f96db7-xnrsc\" not found" pod="default/busybox-7b57f96db7-xnrsc"
	E0916 23:58:38.046880       1 pod_status_patch.go:111] "Failed to patch pod status" err="pods \"busybox-7b57f96db7-wp95z\" not found" pod="default/busybox-7b57f96db7-wp95z"
	E0916 23:58:40.050124       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-6hrm6\": pod busybox-7b57f96db7-6hrm6 is already assigned to node \"ha-472903\"" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-6hrm6" node="ha-472903"
	E0916 23:58:40.050213       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod bd03bad4-af1e-42d0-81fb-6fcaeaa8775e(default/busybox-7b57f96db7-6hrm6) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="default/busybox-7b57f96db7-6hrm6"
	E0916 23:58:40.050248       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-6hrm6\": pod busybox-7b57f96db7-6hrm6 is already assigned to node \"ha-472903\"" logger="UnhandledError" pod="default/busybox-7b57f96db7-6hrm6"
	I0916 23:58:40.051853       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7b57f96db7-6hrm6" node="ha-472903"
	
	
	==> kubelet <==
	Sep 16 23:58:38 ha-472903 kubelet[1676]: E0916 23:58:38.235025    1676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cc7a8d10-408f-4655-ac70-54b4af22d9eb-kube-api-access-hrb62 podName:cc7a8d10-408f-4655-ac70-54b4af22d9eb nodeName:}" failed. No retries permitted until 2025-09-16 23:58:38.735007966 +0000 UTC m=+109.066439678 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-hrb62" (UniqueName: "kubernetes.io/projected/cc7a8d10-408f-4655-ac70-54b4af22d9eb-kube-api-access-hrb62") pod "busybox-7b57f96db7-5pwbb" (UID: "cc7a8d10-408f-4655-ac70-54b4af22d9eb") : failed to fetch token: pod "busybox-7b57f96db7-5pwbb" not found
	Sep 16 23:58:38 ha-472903 kubelet[1676]: E0916 23:58:38.737179    1676 projected.go:196] Error preparing data for projected volume kube-api-access-xrpwc for pod default/busybox-7b57f96db7-xj7ks: failed to fetch token: pod "busybox-7b57f96db7-xj7ks" not found
	Sep 16 23:58:38 ha-472903 kubelet[1676]: E0916 23:58:38.737266    1676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cac915f6-7630-4320-b6d2-fd18f3c19a17-kube-api-access-xrpwc podName:cac915f6-7630-4320-b6d2-fd18f3c19a17 nodeName:}" failed. No retries permitted until 2025-09-16 23:58:39.737245356 +0000 UTC m=+110.068677057 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-xrpwc" (UniqueName: "kubernetes.io/projected/cac915f6-7630-4320-b6d2-fd18f3c19a17-kube-api-access-xrpwc") pod "busybox-7b57f96db7-xj7ks" (UID: "cac915f6-7630-4320-b6d2-fd18f3c19a17") : failed to fetch token: pod "busybox-7b57f96db7-xj7ks" not found
	Sep 16 23:58:38 ha-472903 kubelet[1676]: E0916 23:58:38.737179    1676 projected.go:196] Error preparing data for projected volume kube-api-access-hrb62 for pod default/busybox-7b57f96db7-5pwbb: failed to fetch token: pod "busybox-7b57f96db7-5pwbb" not found
	Sep 16 23:58:38 ha-472903 kubelet[1676]: E0916 23:58:38.737371    1676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cc7a8d10-408f-4655-ac70-54b4af22d9eb-kube-api-access-hrb62 podName:cc7a8d10-408f-4655-ac70-54b4af22d9eb nodeName:}" failed. No retries permitted until 2025-09-16 23:58:39.737351933 +0000 UTC m=+110.068783647 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-hrb62" (UniqueName: "kubernetes.io/projected/cc7a8d10-408f-4655-ac70-54b4af22d9eb-kube-api-access-hrb62") pod "busybox-7b57f96db7-5pwbb" (UID: "cc7a8d10-408f-4655-ac70-54b4af22d9eb") : failed to fetch token: pod "busybox-7b57f96db7-5pwbb" not found
	Sep 16 23:58:39 ha-472903 kubelet[1676]: E0916 23:58:39.027158    1676 status_manager.go:1018] "Failed to get status for pod" err="pods \"busybox-7b57f96db7-5pwbb\" is forbidden: User \"system:node:ha-472903\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'ha-472903' and this object" podUID="cc7a8d10-408f-4655-ac70-54b4af22d9eb" pod="default/busybox-7b57f96db7-5pwbb"
	Sep 16 23:58:39 ha-472903 kubelet[1676]: E0916 23:58:39.028111    1676 status_manager.go:1018] "Failed to get status for pod" err="pods \"busybox-7b57f96db7-xj7ks\" is forbidden: User \"system:node:ha-472903\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'ha-472903' and this object" podUID="cac915f6-7630-4320-b6d2-fd18f3c19a17" pod="default/busybox-7b57f96db7-xj7ks"
	Sep 16 23:58:39 ha-472903 kubelet[1676]: E0916 23:58:39.039445    1676 status_manager.go:1018] "Failed to get status for pod" err="pods \"busybox-7b57f96db7-xj7ks\" is forbidden: User \"system:node:ha-472903\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'ha-472903' and this object" podUID="cac915f6-7630-4320-b6d2-fd18f3c19a17" pod="default/busybox-7b57f96db7-xj7ks"
	Sep 16 23:58:39 ha-472903 kubelet[1676]: E0916 23:58:39.042381    1676 status_manager.go:1018] "Failed to get status for pod" err="pods \"busybox-7b57f96db7-5pwbb\" is forbidden: User \"system:node:ha-472903\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'ha-472903' and this object" podUID="cc7a8d10-408f-4655-ac70-54b4af22d9eb" pod="default/busybox-7b57f96db7-5pwbb"
	Sep 16 23:58:39 ha-472903 kubelet[1676]: I0916 23:58:39.138755    1676 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9njqf\" (UniqueName: \"kubernetes.io/projected/59b9a23c-498d-4802-9790-70931c4a2c06-kube-api-access-9njqf\") pod \"59b9a23c-498d-4802-9790-70931c4a2c06\" (UID: \"59b9a23c-498d-4802-9790-70931c4a2c06\") "
	Sep 16 23:58:39 ha-472903 kubelet[1676]: I0916 23:58:39.138821    1676 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hrb62\" (UniqueName: \"kubernetes.io/projected/cc7a8d10-408f-4655-ac70-54b4af22d9eb-kube-api-access-hrb62\") on node \"ha-472903\" DevicePath \"\""
	Sep 16 23:58:39 ha-472903 kubelet[1676]: I0916 23:58:39.138836    1676 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xrpwc\" (UniqueName: \"kubernetes.io/projected/cac915f6-7630-4320-b6d2-fd18f3c19a17-kube-api-access-xrpwc\") on node \"ha-472903\" DevicePath \"\""
	Sep 16 23:58:39 ha-472903 kubelet[1676]: I0916 23:58:39.140952    1676 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59b9a23c-498d-4802-9790-70931c4a2c06-kube-api-access-9njqf" (OuterVolumeSpecName: "kube-api-access-9njqf") pod "59b9a23c-498d-4802-9790-70931c4a2c06" (UID: "59b9a23c-498d-4802-9790-70931c4a2c06"). InnerVolumeSpecName "kube-api-access-9njqf". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Sep 16 23:58:39 ha-472903 kubelet[1676]: I0916 23:58:39.239025    1676 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9njqf\" (UniqueName: \"kubernetes.io/projected/59b9a23c-498d-4802-9790-70931c4a2c06-kube-api-access-9njqf\") on node \"ha-472903\" DevicePath \"\""
	Sep 16 23:58:39 ha-472903 kubelet[1676]: E0916 23:58:39.752137    1676 status_manager.go:1018] "Failed to get status for pod" err="pods \"busybox-7b57f96db7-5pwbb\" is forbidden: User \"system:node:ha-472903\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'ha-472903' and this object" podUID="cc7a8d10-408f-4655-ac70-54b4af22d9eb" pod="default/busybox-7b57f96db7-5pwbb"
	Sep 16 23:58:39 ha-472903 kubelet[1676]: E0916 23:58:39.753199    1676 status_manager.go:1018] "Failed to get status for pod" err="pods \"busybox-7b57f96db7-xj7ks\" is forbidden: User \"system:node:ha-472903\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'ha-472903' and this object" podUID="cac915f6-7630-4320-b6d2-fd18f3c19a17" pod="default/busybox-7b57f96db7-xj7ks"
	Sep 16 23:58:39 ha-472903 kubelet[1676]: I0916 23:58:39.754268    1676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cac915f6-7630-4320-b6d2-fd18f3c19a17" path="/var/lib/kubelet/pods/cac915f6-7630-4320-b6d2-fd18f3c19a17/volumes"
	Sep 16 23:58:39 ha-472903 kubelet[1676]: I0916 23:58:39.754475    1676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc7a8d10-408f-4655-ac70-54b4af22d9eb" path="/var/lib/kubelet/pods/cc7a8d10-408f-4655-ac70-54b4af22d9eb/volumes"
	Sep 16 23:58:40 ha-472903 kubelet[1676]: E0916 23:58:40.056772    1676 status_manager.go:1018] "Failed to get status for pod" err="pods \"busybox-7b57f96db7-5pwbb\" is forbidden: User \"system:node:ha-472903\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'ha-472903' and this object" podUID="cc7a8d10-408f-4655-ac70-54b4af22d9eb" pod="default/busybox-7b57f96db7-5pwbb"
	Sep 16 23:58:40 ha-472903 kubelet[1676]: E0916 23:58:40.057611    1676 status_manager.go:1018] "Failed to get status for pod" err="pods \"busybox-7b57f96db7-xj7ks\" is forbidden: User \"system:node:ha-472903\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'ha-472903' and this object" podUID="cac915f6-7630-4320-b6d2-fd18f3c19a17" pod="default/busybox-7b57f96db7-xj7ks"
	Sep 16 23:58:40 ha-472903 kubelet[1676]: E0916 23:58:40.059208    1676 status_manager.go:1018] "Failed to get status for pod" err="pods \"busybox-7b57f96db7-5pwbb\" is forbidden: User \"system:node:ha-472903\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'ha-472903' and this object" podUID="cc7a8d10-408f-4655-ac70-54b4af22d9eb" pod="default/busybox-7b57f96db7-5pwbb"
	Sep 16 23:58:40 ha-472903 kubelet[1676]: E0916 23:58:40.060512    1676 status_manager.go:1018] "Failed to get status for pod" err="pods \"busybox-7b57f96db7-xj7ks\" is forbidden: User \"system:node:ha-472903\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'ha-472903' and this object" podUID="cac915f6-7630-4320-b6d2-fd18f3c19a17" pod="default/busybox-7b57f96db7-xj7ks"
	Sep 16 23:58:40 ha-472903 kubelet[1676]: I0916 23:58:40.145054    1676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjkrp\" (UniqueName: \"kubernetes.io/projected/bd03bad4-af1e-42d0-81fb-6fcaeaa8775e-kube-api-access-pjkrp\") pod \"busybox-7b57f96db7-6hrm6\" (UID: \"bd03bad4-af1e-42d0-81fb-6fcaeaa8775e\") " pod="default/busybox-7b57f96db7-6hrm6"
	Sep 16 23:58:41 ha-472903 kubelet[1676]: I0916 23:58:41.754549    1676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59b9a23c-498d-4802-9790-70931c4a2c06" path="/var/lib/kubelet/pods/59b9a23c-498d-4802-9790-70931c4a2c06/volumes"
	Sep 16 23:58:43 ha-472903 kubelet[1676]: I0916 23:58:43.049200    1676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-7b57f96db7-6hrm6" podStartSLOduration=3.061025393 podStartE2EDuration="5.049179166s" podCreationTimestamp="2025-09-16 23:58:38 +0000 UTC" firstStartedPulling="2025-09-16 23:58:40.45690156 +0000 UTC m=+110.788333264" lastFinishedPulling="2025-09-16 23:58:42.445055322 +0000 UTC m=+112.776487037" observedRunningTime="2025-09-16 23:58:43.049092106 +0000 UTC m=+113.380523828" watchObservedRunningTime="2025-09-16 23:58:43.049179166 +0000 UTC m=+113.380610888"
	

                                                
                                                
-- /stdout --
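
Editor's note: two error patterns dominate the controller-manager and scheduler portions of the log above. The kube-proxy DaemonSet status write fails with Kubernetes' optimistic-concurrency conflict ("the object has been modified; please apply your changes to the latest version and try again"), and several busybox pods hit "already assigned to node" bind races that the scheduler resolves on its own ("Pod has been assigned to node. Abort adding it back to queue."). For the conflict case, the conventional client-go pattern is to re-read the object and retry the write. A minimal sketch, assuming a kubeconfig at the default location and a purely illustrative label mutation; this is not code from the minikube test suite:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	// Build a clientset from ~/.kube/config (an assumption for this sketch).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)

	// RetryOnConflict re-reads the object and reapplies the change whenever the
	// API server returns the "object has been modified" conflict seen in the log.
	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		ds, err := clientset.AppsV1().DaemonSets("kube-system").Get(context.TODO(), "kube-proxy", metav1.GetOptions{})
		if err != nil {
			return err
		}
		if ds.Labels == nil {
			ds.Labels = map[string]string{}
		}
		ds.Labels["example.invalid/touched"] = "true" // hypothetical mutation, for illustration only
		_, err = clientset.AppsV1().DaemonSets("kube-system").Update(context.TODO(), ds, metav1.UpdateOptions{})
		return err
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("daemonset updated after conflict retries")
}

retry.DefaultRetry caps the attempts with a short backoff; retry.DefaultBackoff is the longer-backoff alternative for writes that contend more heavily.
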
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-472903 -n ha-472903
helpers_test.go:269: (dbg) Run:  kubectl --context ha-472903 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-7b57f96db7-mknzs
helpers_test.go:282: ======> post-mortem[TestMultiControlPlane/serial/CopyFile]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context ha-472903 describe pod busybox-7b57f96db7-mknzs
helpers_test.go:290: (dbg) kubectl --context ha-472903 describe pod busybox-7b57f96db7-mknzs:

                                                
                                                
-- stdout --
	Name:             busybox-7b57f96db7-mknzs
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             ha-472903-m03/192.168.49.4
	Start Time:       Tue, 16 Sep 2025 23:58:37 +0000
	Labels:           app=busybox
	                  pod-template-hash=7b57f96db7
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7b57f96db7
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gmz92 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-gmz92:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason                  Age                   From               Message
	  ----     ------                  ----                  ----               -------
	  Warning  FailedScheduling        12m                   default-scheduler  running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "busybox-7b57f96db7-mknzs": pod busybox-7b57f96db7-mknzs is already assigned to node "ha-472903-m03"
	  Warning  FailedScheduling        12m                   default-scheduler  running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "busybox-7b57f96db7-mknzs": pod busybox-7b57f96db7-mknzs is already assigned to node "ha-472903-m03"
	  Normal   Scheduled               12m                   default-scheduler  Successfully assigned default/busybox-7b57f96db7-mknzs to ha-472903-m03
	  Warning  FailedCreatePodSandBox  12m                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "72439adc47052c2da00cee62587d780275cf6c2423dee9831567464d4725ee9d": failed to find network info for sandbox "72439adc47052c2da00cee62587d780275cf6c2423dee9831567464d4725ee9d"
	  Warning  FailedCreatePodSandBox  12m                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "24ab8b6bd2f38653d2326c375fc81ebf17317e36885547c7b42c011bb95889ed": failed to find network info for sandbox "24ab8b6bd2f38653d2326c375fc81ebf17317e36885547c7b42c011bb95889ed"
	  Warning  FailedCreatePodSandBox  12m                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "300fece4c100bc3e68a19e1fa6f46c8a378753727caaaeb1533dab71f234be58": failed to find network info for sandbox "300fece4c100bc3e68a19e1fa6f46c8a378753727caaaeb1533dab71f234be58"
	  Warning  FailedCreatePodSandBox  12m                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "e49a14b4de5e24fa450a43c124b2916ad7028d35cbc3b0f74595e68ee161d1d0": failed to find network info for sandbox "e49a14b4de5e24fa450a43c124b2916ad7028d35cbc3b0f74595e68ee161d1d0"
	  Warning  FailedCreatePodSandBox  12m                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "efa290ca498f7c70ae29d8d97709edda97bc6b062aac05a3ef6d6a83fbd42797": failed to find network info for sandbox "efa290ca498f7c70ae29d8d97709edda97bc6b062aac05a3ef6d6a83fbd42797"
	  Warning  FailedCreatePodSandBox  11m                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "d5851ce1270b1c8994400ecd7bdabadaf895488957ffb5173dcd7e289db1de6c": failed to find network info for sandbox "d5851ce1270b1c8994400ecd7bdabadaf895488957ffb5173dcd7e289db1de6c"
	  Warning  FailedCreatePodSandBox  11m                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "11aaa894ae434b08da8122c8f3445d03b4c1e54dfb071596f63a0e4654f49f10": failed to find network info for sandbox "11aaa894ae434b08da8122c8f3445d03b4c1e54dfb071596f63a0e4654f49f10"
	  Warning  FailedCreatePodSandBox  11m                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "c8126e80126ff891a4935c60cfec55753f6bb51d789c0eb46098b72267c7d53c": failed to find network info for sandbox "c8126e80126ff891a4935c60cfec55753f6bb51d789c0eb46098b72267c7d53c"
	  Warning  FailedCreatePodSandBox  11m                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "1389a2f92f350a6f495c76f80031300b6442a6a0cc67abd4b045ff9150b3fc3a": failed to find network info for sandbox "1389a2f92f350a6f495c76f80031300b6442a6a0cc67abd4b045ff9150b3fc3a"
	  Warning  FailedCreatePodSandBox  2m52s (x38 over 11m)  kubelet            (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "c3a9afe91461f3ea405980387ac5fab85785c7cf3f180d2b0f894e1df94ca62d": failed to find network info for sandbox "c3a9afe91461f3ea405980387ac5fab85785c7cf3f180d2b0f894e1df94ca62d"

                                                
                                                
-- /stdout --
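
Editor's note: the describe output shows busybox-7b57f96db7-mknzs bound to ha-472903-m03 but stuck in ContainerCreating; after the benign bind-conflict warnings it keeps accumulating FailedCreatePodSandBox events because the runtime cannot find network info for the sandbox. A short client-go sketch (illustrative only, not part of helpers_test.go) that fetches the same two facts programmatically: the node the pod actually landed on, and its Warning events, roughly the data kubectl describe renders in its Events table:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Clientset from ~/.kube/config (an assumption for this sketch).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)

	const ns, name = "default", "busybox-7b57f96db7-mknzs"

	// Where did the scheduler ultimately bind the pod?
	pod, err := clientset.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s is bound to node %q, phase %s\n", name, pod.Spec.NodeName, pod.Status.Phase)

	// Warning events for the pod, e.g. the FailedCreatePodSandBox entries above.
	events, err := clientset.CoreV1().Events(ns).List(context.TODO(), metav1.ListOptions{
		FieldSelector: "involvedObject.name=" + name + ",type=Warning",
	})
	if err != nil {
		panic(err)
	}
	for _, e := range events.Items {
		fmt.Printf("%s\t%s\t%s\n", e.Reason, e.Source.Component, e.Message)
	}
}
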
helpers_test.go:293: <<< TestMultiControlPlane/serial/CopyFile FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/CopyFile (15.53s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (14.45s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-472903 node stop m02 --alsologtostderr -v 5: (11.930048656s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-472903 status --alsologtostderr -v 5: exit status 7 (537.444366ms)

                                                
                                                
-- stdout --
	ha-472903
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-472903-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-472903-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-472903-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 00:11:46.055015  830664 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:11:46.055270  830664 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:11:46.055281  830664 out.go:374] Setting ErrFile to fd 2...
	I0917 00:11:46.055286  830664 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:11:46.055514  830664 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-749120/.minikube/bin
	I0917 00:11:46.055685  830664 out.go:368] Setting JSON to false
	I0917 00:11:46.055704  830664 mustload.go:65] Loading cluster: ha-472903
	I0917 00:11:46.055815  830664 notify.go:220] Checking for updates...
	I0917 00:11:46.056156  830664 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:11:46.056191  830664 status.go:174] checking status of ha-472903 ...
	I0917 00:11:46.056628  830664 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0917 00:11:46.076849  830664 status.go:371] ha-472903 host status = "Running" (err=<nil>)
	I0917 00:11:46.076910  830664 host.go:66] Checking if "ha-472903" exists ...
	I0917 00:11:46.077318  830664 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903
	I0917 00:11:46.095306  830664 host.go:66] Checking if "ha-472903" exists ...
	I0917 00:11:46.095586  830664 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:11:46.095634  830664 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0917 00:11:46.113323  830664 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0917 00:11:46.205737  830664 ssh_runner.go:195] Run: systemctl --version
	I0917 00:11:46.210478  830664 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:11:46.222962  830664 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:11:46.280646  830664 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:2 ContainersPaused:0 ContainersStopped:2 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:55 OomKillDisable:false NGoroutines:65 SystemTime:2025-09-17 00:11:46.270666807 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:11:46.281197  830664 kubeconfig.go:125] found "ha-472903" server: "https://192.168.49.254:8443"
	I0917 00:11:46.281227  830664 api_server.go:166] Checking apiserver status ...
	I0917 00:11:46.281272  830664 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:11:46.293671  830664 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1514/cgroup
	W0917 00:11:46.303464  830664 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1514/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:11:46.303517  830664 ssh_runner.go:195] Run: ls
	I0917 00:11:46.307114  830664 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:11:46.311049  830664 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:11:46.311070  830664 status.go:463] ha-472903 apiserver status = Running (err=<nil>)
	I0917 00:11:46.311080  830664 status.go:176] ha-472903 status: &{Name:ha-472903 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:11:46.311094  830664 status.go:174] checking status of ha-472903-m02 ...
	I0917 00:11:46.311326  830664 cli_runner.go:164] Run: docker container inspect ha-472903-m02 --format={{.State.Status}}
	I0917 00:11:46.330330  830664 status.go:371] ha-472903-m02 host status = "Stopped" (err=<nil>)
	I0917 00:11:46.330354  830664 status.go:384] host is not running, skipping remaining checks
	I0917 00:11:46.330361  830664 status.go:176] ha-472903-m02 status: &{Name:ha-472903-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:11:46.330396  830664 status.go:174] checking status of ha-472903-m03 ...
	I0917 00:11:46.330673  830664 cli_runner.go:164] Run: docker container inspect ha-472903-m03 --format={{.State.Status}}
	I0917 00:11:46.348951  830664 status.go:371] ha-472903-m03 host status = "Running" (err=<nil>)
	I0917 00:11:46.348977  830664 host.go:66] Checking if "ha-472903-m03" exists ...
	I0917 00:11:46.349242  830664 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m03
	I0917 00:11:46.366855  830664 host.go:66] Checking if "ha-472903-m03" exists ...
	I0917 00:11:46.367130  830664 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:11:46.367192  830664 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0917 00:11:46.384562  830664 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33554 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa Username:docker}
	I0917 00:11:46.477567  830664 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:11:46.489670  830664 kubeconfig.go:125] found "ha-472903" server: "https://192.168.49.254:8443"
	I0917 00:11:46.489697  830664 api_server.go:166] Checking apiserver status ...
	I0917 00:11:46.489732  830664 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:11:46.501028  830664 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1396/cgroup
	W0917 00:11:46.512197  830664 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1396/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:11:46.512273  830664 ssh_runner.go:195] Run: ls
	I0917 00:11:46.516162  830664 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:11:46.521006  830664 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:11:46.521032  830664 status.go:463] ha-472903-m03 apiserver status = Running (err=<nil>)
	I0917 00:11:46.521045  830664 status.go:176] ha-472903-m03 status: &{Name:ha-472903-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:11:46.521067  830664 status.go:174] checking status of ha-472903-m04 ...
	I0917 00:11:46.521375  830664 cli_runner.go:164] Run: docker container inspect ha-472903-m04 --format={{.State.Status}}
	I0917 00:11:46.540491  830664 status.go:371] ha-472903-m04 host status = "Stopped" (err=<nil>)
	I0917 00:11:46.540519  830664 status.go:384] host is not running, skipping remaining checks
	I0917 00:11:46.540526  830664 status.go:176] ha-472903-m04 status: &{Name:ha-472903-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
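
Editor's note: in the stderr trace, each control plane is declared healthy by probing https://192.168.49.254:8443/healthz (the shared HA endpoint, resolved from the kubeconfig server entry at kubeconfig.go:125) and accepting a 200 response with body "ok". A self-contained sketch of that probe; TLS verification is skipped only to keep the example standalone, since the profile's cluster CA is not loaded here:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Endpoint taken from the status trace above; skipping certificate verification
	// is a shortcut for the sketch (a real check would trust the cluster CA instead).
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.49.254:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, string(body)) // the log above shows "200: ok"
}
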
ha_test.go:380: status says not three hosts are running: args "out/minikube-linux-amd64 -p ha-472903 status --alsologtostderr -v 5": ha-472903
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-472903-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-472903-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-472903-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
ha_test.go:383: status says not three kubelets are running: args "out/minikube-linux-amd64 -p ha-472903 status --alsologtostderr -v 5": ha-472903
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-472903-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-472903-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-472903-m04
type: Worker
host: Stopped
kubelet: Stopped
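
Editor's note: the assertions at ha_test.go:380 and ha_test.go:383 expect three nodes to report a running host and kubelet after m02 is deliberately stopped; only ha-472903 and ha-472903-m03 qualify here because m04 is also down, so both checks fail. For ad-hoc tooling around the same check, a hedged sketch that shells out to the status command and decodes JSON instead of scraping text. The -o json flag and the array-shaped output are assumptions to verify against the minikube version in use; the field names mirror the status struct printed in the stderr trace (Name, Host, Kubelet, APIServer, Kubeconfig, Worker):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// nodeStatus mirrors the fields shown in the status trace above; the JSON key
// names are assumed to match and should be verified.
type nodeStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	// Binary path as used throughout this report, relative to the test workspace.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "ha-472903", "status", "-o", "json").Output()
	if err != nil && len(out) == 0 {
		// status exits non-zero (e.g. exit status 7 above) when nodes are stopped,
		// but it still prints the payload, so only bail out if we got nothing.
		panic(err)
	}
	var nodes []nodeStatus
	if err := json.Unmarshal(out, &nodes); err != nil {
		panic(err)
	}
	running := 0
	for _, n := range nodes {
		if n.Host == "Running" && n.Kubelet == "Running" {
			running++
		}
	}
	fmt.Printf("%d/%d nodes report a Running host and kubelet\n", running, len(nodes))
}
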

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-472903
helpers_test.go:243: (dbg) docker inspect ha-472903:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "05f03528ecc5ba6a39041bcc2845d236679d61fa3752c15e7e068dac7d8c9047",
	        "Created": "2025-09-16T23:56:35.178831158Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 804802,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-16T23:56:35.209552026Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/05f03528ecc5ba6a39041bcc2845d236679d61fa3752c15e7e068dac7d8c9047/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/05f03528ecc5ba6a39041bcc2845d236679d61fa3752c15e7e068dac7d8c9047/hostname",
	        "HostsPath": "/var/lib/docker/containers/05f03528ecc5ba6a39041bcc2845d236679d61fa3752c15e7e068dac7d8c9047/hosts",
	        "LogPath": "/var/lib/docker/containers/05f03528ecc5ba6a39041bcc2845d236679d61fa3752c15e7e068dac7d8c9047/05f03528ecc5ba6a39041bcc2845d236679d61fa3752c15e7e068dac7d8c9047-json.log",
	        "Name": "/ha-472903",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-472903:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-472903",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "05f03528ecc5ba6a39041bcc2845d236679d61fa3752c15e7e068dac7d8c9047",
	                "LowerDir": "/var/lib/docker/overlay2/37229b42d46c992f89d690b880f5a9c43e154eecc2ad5aeee133e9eb30accccb-init/diff:/var/lib/docker/overlay2/949a3fbecd0c2c005aa419b7ddc214e7bf4333225d7b227e8b0d0ea188b945ec/diff",
	                "MergedDir": "/var/lib/docker/overlay2/37229b42d46c992f89d690b880f5a9c43e154eecc2ad5aeee133e9eb30accccb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/37229b42d46c992f89d690b880f5a9c43e154eecc2ad5aeee133e9eb30accccb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/37229b42d46c992f89d690b880f5a9c43e154eecc2ad5aeee133e9eb30accccb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-472903",
	                "Source": "/var/lib/docker/volumes/ha-472903/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-472903",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-472903",
	                "name.minikube.sigs.k8s.io": "ha-472903",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "abe382ce28757e80b5cdae91a64217d3672b21c23f3517480bd53105aeca147e",
	            "SandboxKey": "/var/run/docker/netns/abe382ce2875",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33544"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33545"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33548"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33546"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33547"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-472903": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7e:42:9f:f6:50:c2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "22d49b2f397dfabc2a3967bd54b05204a52976e683f65ff07bff00e793040bef",
	                    "EndpointID": "4d4d83129a167c8183e8ef58cc6057f613d8d69adf59710ba6c623d1ff2970c6",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-472903",
	                        "05f03528ecc5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
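
Editor's note: the inspect dump confirms the ha-472903 container itself is healthy: State.Status "running" (Pid 804802) with IP 192.168.49.2 on the ha-472903 network, matching the status trace earlier where minikube runs `docker container inspect ha-472903 --format={{.State.Status}}`. The same fields are reachable through the Docker Engine Go SDK; a small sketch (illustrative, not minikube's code path):

package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/client"
)

func main() {
	// FromEnv honours DOCKER_HOST etc.; API version negotiation avoids
	// client/daemon skew (the daemon above reports ServerVersion 28.4.0).
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	insp, err := cli.ContainerInspect(context.Background(), "ha-472903")
	if err != nil {
		panic(err)
	}
	// Same fields the report's docker inspect dump shows under State and NetworkSettings.
	fmt.Printf("status=%s pid=%d\n", insp.State.Status, insp.State.Pid)
	if net, ok := insp.NetworkSettings.Networks["ha-472903"]; ok {
		fmt.Printf("ip=%s gateway=%s\n", net.IPAddress, net.Gateway)
	}
}
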
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-472903 -n ha-472903
helpers_test.go:252: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p ha-472903 logs -n 25: (1.116003901s)
helpers_test.go:260: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-472903 cp ha-472903-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2970888916/001/cp-test_ha-472903-m03.txt │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │ 17 Sep 25 00:11 UTC │
	│ ssh     │ ha-472903 ssh -n ha-472903-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │ 17 Sep 25 00:11 UTC │
	│ cp      │ ha-472903 cp ha-472903-m03:/home/docker/cp-test.txt ha-472903:/home/docker/cp-test_ha-472903-m03_ha-472903.txt                       │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │ 17 Sep 25 00:11 UTC │
	│ ssh     │ ha-472903 ssh -n ha-472903-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │ 17 Sep 25 00:11 UTC │
	│ ssh     │ ha-472903 ssh -n ha-472903 sudo cat /home/docker/cp-test_ha-472903-m03_ha-472903.txt                                                 │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │ 17 Sep 25 00:11 UTC │
	│ cp      │ ha-472903 cp ha-472903-m03:/home/docker/cp-test.txt ha-472903-m02:/home/docker/cp-test_ha-472903-m03_ha-472903-m02.txt               │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │ 17 Sep 25 00:11 UTC │
	│ ssh     │ ha-472903 ssh -n ha-472903-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │ 17 Sep 25 00:11 UTC │
	│ ssh     │ ha-472903 ssh -n ha-472903-m02 sudo cat /home/docker/cp-test_ha-472903-m03_ha-472903-m02.txt                                         │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │ 17 Sep 25 00:11 UTC │
	│ cp      │ ha-472903 cp ha-472903-m03:/home/docker/cp-test.txt ha-472903-m04:/home/docker/cp-test_ha-472903-m03_ha-472903-m04.txt               │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ ssh     │ ha-472903 ssh -n ha-472903-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │ 17 Sep 25 00:11 UTC │
	│ ssh     │ ha-472903 ssh -n ha-472903-m04 sudo cat /home/docker/cp-test_ha-472903-m03_ha-472903-m04.txt                                         │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ cp      │ ha-472903 cp testdata/cp-test.txt ha-472903-m04:/home/docker/cp-test.txt                                                             │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ ssh     │ ha-472903 ssh -n ha-472903-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ cp      │ ha-472903 cp ha-472903-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2970888916/001/cp-test_ha-472903-m04.txt │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ ssh     │ ha-472903 ssh -n ha-472903-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ cp      │ ha-472903 cp ha-472903-m04:/home/docker/cp-test.txt ha-472903:/home/docker/cp-test_ha-472903-m04_ha-472903.txt                       │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ ssh     │ ha-472903 ssh -n ha-472903-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ ssh     │ ha-472903 ssh -n ha-472903 sudo cat /home/docker/cp-test_ha-472903-m04_ha-472903.txt                                                 │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ cp      │ ha-472903 cp ha-472903-m04:/home/docker/cp-test.txt ha-472903-m02:/home/docker/cp-test_ha-472903-m04_ha-472903-m02.txt               │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ ssh     │ ha-472903 ssh -n ha-472903-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ ssh     │ ha-472903 ssh -n ha-472903-m02 sudo cat /home/docker/cp-test_ha-472903-m04_ha-472903-m02.txt                                         │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ cp      │ ha-472903 cp ha-472903-m04:/home/docker/cp-test.txt ha-472903-m03:/home/docker/cp-test_ha-472903-m04_ha-472903-m03.txt               │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ ssh     │ ha-472903 ssh -n ha-472903-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ ssh     │ ha-472903 ssh -n ha-472903-m03 sudo cat /home/docker/cp-test_ha-472903-m04_ha-472903-m03.txt                                         │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ node    │ ha-472903 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │ 17 Sep 25 00:11 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
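
The cp/ssh pairs in the audit table follow a copy-then-verify pattern: each `minikube cp` onto a node is immediately followed by `ssh -n <node> sudo cat` of the destination file. A rough Go sketch of one such round trip, using the binary path, profile, node, and file paths shown in the table (everything else is an assumption, not part of the test code):

package main

import (
	"fmt"
	"os/exec"
)

// copyThenVerify copies a file onto a node with "minikube cp", then
// reads it back with "minikube ssh -n <node> sudo cat".
func copyThenVerify(bin, profile, node, src, dst string) (string, error) {
	if out, err := exec.Command(bin, "-p", profile, "cp", src, node+":"+dst).CombinedOutput(); err != nil {
		return string(out), err
	}
	out, err := exec.Command(bin, "-p", profile, "ssh", "-n", node, "sudo", "cat", dst).CombinedOutput()
	return string(out), err
}

func main() {
	out, err := copyThenVerify("out/minikube-linux-amd64", "ha-472903", "ha-472903-m04",
		"testdata/cp-test.txt", "/home/docker/cp-test.txt")
	fmt.Println(out, err)
}
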
	
	
	==> Last Start <==
	Log file created at: 2025/09/16 23:56:30
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 23:56:30.301112  804231 out.go:360] Setting OutFile to fd 1 ...
	I0916 23:56:30.301322  804231 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0916 23:56:30.301330  804231 out.go:374] Setting ErrFile to fd 2...
	I0916 23:56:30.301335  804231 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0916 23:56:30.301535  804231 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-749120/.minikube/bin
	I0916 23:56:30.302024  804231 out.go:368] Setting JSON to false
	I0916 23:56:30.302925  804231 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":9532,"bootTime":1758057458,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 23:56:30.303027  804231 start.go:140] virtualization: kvm guest
	I0916 23:56:30.304965  804231 out.go:179] * [ha-472903] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0916 23:56:30.306181  804231 out.go:179]   - MINIKUBE_LOCATION=21550
	I0916 23:56:30.306189  804231 notify.go:220] Checking for updates...
	I0916 23:56:30.308309  804231 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 23:56:30.309530  804231 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-749120/kubeconfig
	I0916 23:56:30.310577  804231 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-749120/.minikube
	I0916 23:56:30.311523  804231 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 23:56:30.312490  804231 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 23:56:30.313634  804231 driver.go:421] Setting default libvirt URI to qemu:///system
	I0916 23:56:30.336203  804231 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0916 23:56:30.336330  804231 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 23:56:30.390690  804231 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-16 23:56:30.380521507 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 23:56:30.390801  804231 docker.go:318] overlay module found
	I0916 23:56:30.392435  804231 out.go:179] * Using the docker driver based on user configuration
	I0916 23:56:30.393493  804231 start.go:304] selected driver: docker
	I0916 23:56:30.393505  804231 start.go:918] validating driver "docker" against <nil>
	I0916 23:56:30.393517  804231 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 23:56:30.394092  804231 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 23:56:30.448140  804231 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-16 23:56:30.438500908 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 23:56:30.448302  804231 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0916 23:56:30.448529  804231 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 23:56:30.450143  804231 out.go:179] * Using Docker driver with root privileges
	I0916 23:56:30.451156  804231 cni.go:84] Creating CNI manager for ""
	I0916 23:56:30.451216  804231 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0916 23:56:30.451226  804231 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 23:56:30.451301  804231 start.go:348] cluster config:
	{Name:ha-472903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPl
ugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m
0s}
	I0916 23:56:30.452491  804231 out.go:179] * Starting "ha-472903" primary control-plane node in "ha-472903" cluster
	I0916 23:56:30.453469  804231 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0916 23:56:30.454617  804231 out.go:179] * Pulling base image v0.0.48 ...
	I0916 23:56:30.455626  804231 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0916 23:56:30.455658  804231 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4
	I0916 23:56:30.455669  804231 cache.go:58] Caching tarball of preloaded images
	I0916 23:56:30.455737  804231 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0916 23:56:30.455747  804231 preload.go:172] Found /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 23:56:30.455875  804231 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0916 23:56:30.456208  804231 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0916 23:56:30.456245  804231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json: {Name:mkb16495f6ef626fa58a9600f3b4a943b5aaf14d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:56:30.475568  804231 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0916 23:56:30.475587  804231 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0916 23:56:30.475611  804231 cache.go:232] Successfully downloaded all kic artifacts
	I0916 23:56:30.475644  804231 start.go:360] acquireMachinesLock for ha-472903: {Name:mk994658ce3314f2aed1dec341debc49d36a4326 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 23:56:30.475759  804231 start.go:364] duration metric: took 97.738µs to acquireMachinesLock for "ha-472903"
	I0916 23:56:30.475786  804231 start.go:93] Provisioning new machine with config: &{Name:ha-472903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APISer
verIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath
: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 23:56:30.475881  804231 start.go:125] createHost starting for "" (driver="docker")
	I0916 23:56:30.477680  804231 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0916 23:56:30.477953  804231 start.go:159] libmachine.API.Create for "ha-472903" (driver="docker")
	I0916 23:56:30.477986  804231 client.go:168] LocalClient.Create starting
	I0916 23:56:30.478060  804231 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem
	I0916 23:56:30.478097  804231 main.go:141] libmachine: Decoding PEM data...
	I0916 23:56:30.478118  804231 main.go:141] libmachine: Parsing certificate...
	I0916 23:56:30.478203  804231 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem
	I0916 23:56:30.478234  804231 main.go:141] libmachine: Decoding PEM data...
	I0916 23:56:30.478247  804231 main.go:141] libmachine: Parsing certificate...
	I0916 23:56:30.478706  804231 cli_runner.go:164] Run: docker network inspect ha-472903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0916 23:56:30.494743  804231 cli_runner.go:211] docker network inspect ha-472903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0916 23:56:30.494806  804231 network_create.go:284] running [docker network inspect ha-472903] to gather additional debugging logs...
	I0916 23:56:30.494829  804231 cli_runner.go:164] Run: docker network inspect ha-472903
	W0916 23:56:30.510851  804231 cli_runner.go:211] docker network inspect ha-472903 returned with exit code 1
	I0916 23:56:30.510886  804231 network_create.go:287] error running [docker network inspect ha-472903]: docker network inspect ha-472903: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-472903 not found
	I0916 23:56:30.510919  804231 network_create.go:289] output of [docker network inspect ha-472903]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-472903 not found
	
	** /stderr **
	I0916 23:56:30.511007  804231 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:56:30.527272  804231 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001b12870}
	I0916 23:56:30.527312  804231 network_create.go:124] attempt to create docker network ha-472903 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0916 23:56:30.527357  804231 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-472903 ha-472903
	I0916 23:56:30.581246  804231 network_create.go:108] docker network ha-472903 192.168.49.0/24 created
	I0916 23:56:30.581278  804231 kic.go:121] calculated static IP "192.168.49.2" for the "ha-472903" container
	I0916 23:56:30.581331  804231 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 23:56:30.597113  804231 cli_runner.go:164] Run: docker volume create ha-472903 --label name.minikube.sigs.k8s.io=ha-472903 --label created_by.minikube.sigs.k8s.io=true
	I0916 23:56:30.614615  804231 oci.go:103] Successfully created a docker volume ha-472903
	I0916 23:56:30.614694  804231 cli_runner.go:164] Run: docker run --rm --name ha-472903-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-472903 --entrypoint /usr/bin/test -v ha-472903:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0916 23:56:30.983301  804231 oci.go:107] Successfully prepared a docker volume ha-472903
	I0916 23:56:30.983346  804231 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0916 23:56:30.983369  804231 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 23:56:30.983457  804231 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-472903:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 23:56:35.109877  804231 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-472903:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.126378793s)
	I0916 23:56:35.109930  804231 kic.go:203] duration metric: took 4.126557088s to extract preloaded images to volume ...
	W0916 23:56:35.110010  804231 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0916 23:56:35.110041  804231 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0916 23:56:35.110081  804231 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 23:56:35.162423  804231 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-472903 --name ha-472903 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-472903 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-472903 --network ha-472903 --ip 192.168.49.2 --volume ha-472903:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0916 23:56:35.411448  804231 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Running}}
	I0916 23:56:35.428877  804231 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0916 23:56:35.447492  804231 cli_runner.go:164] Run: docker exec ha-472903 stat /var/lib/dpkg/alternatives/iptables
	I0916 23:56:35.490145  804231 oci.go:144] the created container "ha-472903" has a running status.
	I0916 23:56:35.490177  804231 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa...
	I0916 23:56:35.748917  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0916 23:56:35.748974  804231 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 23:56:35.776040  804231 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0916 23:56:35.795374  804231 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 23:56:35.795403  804231 kic_runner.go:114] Args: [docker exec --privileged ha-472903 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 23:56:35.841194  804231 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0916 23:56:35.859165  804231 machine.go:93] provisionDockerMachine start ...
	I0916 23:56:35.859278  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:56:35.877348  804231 main.go:141] libmachine: Using SSH client type: native
	I0916 23:56:35.877637  804231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33544 <nil> <nil>}
	I0916 23:56:35.877654  804231 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 23:56:36.014327  804231 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-472903
	
	I0916 23:56:36.014356  804231 ubuntu.go:182] provisioning hostname "ha-472903"
	I0916 23:56:36.014430  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:56:36.033295  804231 main.go:141] libmachine: Using SSH client type: native
	I0916 23:56:36.033543  804231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33544 <nil> <nil>}
	I0916 23:56:36.033558  804231 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-472903 && echo "ha-472903" | sudo tee /etc/hostname
	I0916 23:56:36.178557  804231 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-472903
	
	I0916 23:56:36.178627  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:56:36.196584  804231 main.go:141] libmachine: Using SSH client type: native
	I0916 23:56:36.196791  804231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33544 <nil> <nil>}
	I0916 23:56:36.196814  804231 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-472903' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-472903/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-472903' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 23:56:36.331895  804231 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 23:56:36.331954  804231 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-749120/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-749120/.minikube}
	I0916 23:56:36.331987  804231 ubuntu.go:190] setting up certificates
	I0916 23:56:36.332000  804231 provision.go:84] configureAuth start
	I0916 23:56:36.332062  804231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903
	I0916 23:56:36.350923  804231 provision.go:143] copyHostCerts
	I0916 23:56:36.350968  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem
	I0916 23:56:36.351011  804231 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem, removing ...
	I0916 23:56:36.351021  804231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem
	I0916 23:56:36.351100  804231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem (1078 bytes)
	I0916 23:56:36.351216  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem
	I0916 23:56:36.351254  804231 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem, removing ...
	I0916 23:56:36.351265  804231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem
	I0916 23:56:36.351307  804231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem (1123 bytes)
	I0916 23:56:36.351374  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem
	I0916 23:56:36.351400  804231 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem, removing ...
	I0916 23:56:36.351409  804231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem
	I0916 23:56:36.351461  804231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem (1675 bytes)
	I0916 23:56:36.351538  804231 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem org=jenkins.ha-472903 san=[127.0.0.1 192.168.49.2 ha-472903 localhost minikube]
	I0916 23:56:36.406870  804231 provision.go:177] copyRemoteCerts
	I0916 23:56:36.406927  804231 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 23:56:36.406977  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:56:36.424064  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0916 23:56:36.520663  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 23:56:36.520737  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 23:56:36.546100  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 23:56:36.546162  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0916 23:56:36.569886  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 23:56:36.569946  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 23:56:36.593694  804231 provision.go:87] duration metric: took 261.676108ms to configureAuth
	I0916 23:56:36.593725  804231 ubuntu.go:206] setting minikube options for container-runtime
	I0916 23:56:36.593891  804231 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0916 23:56:36.593903  804231 machine.go:96] duration metric: took 734.71199ms to provisionDockerMachine
	I0916 23:56:36.593911  804231 client.go:171] duration metric: took 6.115914604s to LocalClient.Create
	I0916 23:56:36.593933  804231 start.go:167] duration metric: took 6.115991162s to libmachine.API.Create "ha-472903"
	I0916 23:56:36.593942  804231 start.go:293] postStartSetup for "ha-472903" (driver="docker")
	I0916 23:56:36.593950  804231 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 23:56:36.593994  804231 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 23:56:36.594038  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:56:36.611127  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0916 23:56:36.708294  804231 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 23:56:36.711629  804231 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 23:56:36.711662  804231 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 23:56:36.711669  804231 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 23:56:36.711677  804231 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0916 23:56:36.711690  804231 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-749120/.minikube/addons for local assets ...
	I0916 23:56:36.711734  804231 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-749120/.minikube/files for local assets ...
	I0916 23:56:36.711817  804231 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> 7527072.pem in /etc/ssl/certs
	I0916 23:56:36.711829  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> /etc/ssl/certs/7527072.pem
	I0916 23:56:36.711917  804231 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 23:56:36.720521  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem --> /etc/ssl/certs/7527072.pem (1708 bytes)
	I0916 23:56:36.746614  804231 start.go:296] duration metric: took 152.657806ms for postStartSetup
	I0916 23:56:36.746970  804231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903
	I0916 23:56:36.763912  804231 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0916 23:56:36.764159  804231 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 23:56:36.764204  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:56:36.781099  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0916 23:56:36.872372  804231 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 23:56:36.876670  804231 start.go:128] duration metric: took 6.400768235s to createHost
	I0916 23:56:36.876701  804231 start.go:83] releasing machines lock for "ha-472903", held for 6.400928988s
	I0916 23:56:36.876787  804231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903
	I0916 23:56:36.894080  804231 ssh_runner.go:195] Run: cat /version.json
	I0916 23:56:36.894094  804231 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 23:56:36.894141  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:56:36.894182  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:56:36.912628  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0916 23:56:36.913001  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0916 23:56:37.079386  804231 ssh_runner.go:195] Run: systemctl --version
	I0916 23:56:37.084104  804231 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 23:56:37.088563  804231 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 23:56:37.116786  804231 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 23:56:37.116846  804231 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 23:56:37.142716  804231 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0916 23:56:37.142738  804231 start.go:495] detecting cgroup driver to use...
	I0916 23:56:37.142772  804231 detect.go:190] detected "systemd" cgroup driver on host os
	I0916 23:56:37.142832  804231 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 23:56:37.154693  804231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 23:56:37.165920  804231 docker.go:218] disabling cri-docker service (if available) ...
	I0916 23:56:37.165978  804231 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 23:56:37.179227  804231 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 23:56:37.192751  804231 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 23:56:37.255915  804231 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 23:56:37.324761  804231 docker.go:234] disabling docker service ...
	I0916 23:56:37.324836  804231 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 23:56:37.342233  804231 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 23:56:37.353324  804231 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 23:56:37.420555  804231 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 23:56:37.486396  804231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 23:56:37.497453  804231 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 23:56:37.513435  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0916 23:56:37.524399  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 23:56:37.534072  804231 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0916 23:56:37.534132  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0916 23:56:37.543872  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 23:56:37.553478  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 23:56:37.562918  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 23:56:37.572431  804231 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 23:56:37.581176  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 23:56:37.590540  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 23:56:37.599825  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 23:56:37.609340  804231 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 23:56:37.617500  804231 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 23:56:37.625771  804231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:56:37.685687  804231 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0916 23:56:37.787201  804231 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0916 23:56:37.787275  804231 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0916 23:56:37.791126  804231 start.go:563] Will wait 60s for crictl version
	I0916 23:56:37.791200  804231 ssh_runner.go:195] Run: which crictl
	I0916 23:56:37.794684  804231 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 23:56:37.828753  804231 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0916 23:56:37.828806  804231 ssh_runner.go:195] Run: containerd --version
	I0916 23:56:37.851610  804231 ssh_runner.go:195] Run: containerd --version
	I0916 23:56:37.876577  804231 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0916 23:56:37.877711  804231 cli_runner.go:164] Run: docker network inspect ha-472903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:56:37.894044  804231 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 23:56:37.897995  804231 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 23:56:37.909702  804231 kubeadm.go:875] updating cluster {Name:ha-472903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIP
s:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetCli
entPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 23:56:37.909830  804231 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0916 23:56:37.909936  804231 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 23:56:37.943964  804231 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 23:56:37.943985  804231 containerd.go:534] Images already preloaded, skipping extraction
	I0916 23:56:37.944040  804231 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 23:56:37.976374  804231 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 23:56:37.976397  804231 cache_images.go:85] Images are preloaded, skipping loading
	I0916 23:56:37.976405  804231 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 containerd true true} ...
	I0916 23:56:37.976525  804231 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-472903 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 23:56:37.976590  804231 ssh_runner.go:195] Run: sudo crictl info
	I0916 23:56:38.009585  804231 cni.go:84] Creating CNI manager for ""
	I0916 23:56:38.009608  804231 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0916 23:56:38.009620  804231 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 23:56:38.009642  804231 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-472903 NodeName:ha-472903 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 23:56:38.009740  804231 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "ha-472903"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
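The four kubeadm documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what minikube later copies to /var/tmp/minikube/kubeadm.yaml on the node. As a sketch only, not something this test run executes, a generated config like this can be sanity-checked without changing node state by using kubeadm's dry-run mode:

	# Hypothetical manual check: parse the generated config and print the manifests
	# kubeadm would create, without modifying the host.
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run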
	
	I0916 23:56:38.009763  804231 kube-vip.go:115] generating kube-vip config ...
	I0916 23:56:38.009799  804231 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0916 23:56:38.022796  804231 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0916 23:56:38.022978  804231 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
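The static pod above is what serves the HA virtual IP 192.168.49.254 once the kubelet starts it from /etc/kubernetes/manifests. A minimal, hypothetical way to confirm the VIP is fronting the API server after the control plane comes up (not part of the test itself):

	# -k skips TLS verification; /livez is the kube-apiserver liveness endpoint reached through the VIP.
	curl -k https://192.168.49.254:8443/livez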
	I0916 23:56:38.023041  804231 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0916 23:56:38.032162  804231 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 23:56:38.032241  804231 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0916 23:56:38.040936  804231 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0916 23:56:38.058672  804231 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 23:56:38.079097  804231 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I0916 23:56:38.097183  804231 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I0916 23:56:38.116629  804231 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0916 23:56:38.120221  804231 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 23:56:38.131205  804231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:56:38.195735  804231 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 23:56:38.216649  804231 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903 for IP: 192.168.49.2
	I0916 23:56:38.216671  804231 certs.go:194] generating shared ca certs ...
	I0916 23:56:38.216692  804231 certs.go:226] acquiring lock for ca certs: {Name:mk87d179b4a631193bd9c86db8034ccf19400cde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:56:38.216854  804231 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key
	I0916 23:56:38.216907  804231 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key
	I0916 23:56:38.216920  804231 certs.go:256] generating profile certs ...
	I0916 23:56:38.216989  804231 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key
	I0916 23:56:38.217007  804231 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.crt with IP's: []
	I0916 23:56:38.286683  804231 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.crt ...
	I0916 23:56:38.286713  804231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.crt: {Name:mk764ef4ac73429cea14d799835f3822d8afb254 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:56:38.286876  804231 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key ...
	I0916 23:56:38.286887  804231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key: {Name:mk988f40b7ad20c61b4ffc19afd15eea50787a6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:56:38.286965  804231 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.ef70afe8
	I0916 23:56:38.286981  804231 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.ef70afe8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I0916 23:56:38.411782  804231 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.ef70afe8 ...
	I0916 23:56:38.411812  804231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.ef70afe8: {Name:mkbca9fcc4cd73eb913b43ef67240975ba048601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:56:38.411977  804231 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.ef70afe8 ...
	I0916 23:56:38.411990  804231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.ef70afe8: {Name:mk56f7fb29011c6372caaf96dfdbcab1b202e8b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:56:38.412061  804231 certs.go:381] copying /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.ef70afe8 -> /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt
	I0916 23:56:38.412138  804231 certs.go:385] copying /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.ef70afe8 -> /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key
	I0916 23:56:38.412190  804231 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key
	I0916 23:56:38.412204  804231 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt with IP's: []
	I0916 23:56:38.735728  804231 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt ...
	I0916 23:56:38.735759  804231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt: {Name:mke25602938652bbe51197bb8e5738dfc5dca50b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:56:38.735935  804231 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key ...
	I0916 23:56:38.735947  804231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key: {Name:mkc7d616357a8be8181d43ca8cb33ab512ce94dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:56:38.736027  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 23:56:38.736044  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 23:56:38.736055  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 23:56:38.736068  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 23:56:38.736078  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 23:56:38.736090  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 23:56:38.736105  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 23:56:38.736115  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 23:56:38.736175  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem (1338 bytes)
	W0916 23:56:38.736210  804231 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707_empty.pem, impossibly tiny 0 bytes
	I0916 23:56:38.736218  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem (1675 bytes)
	I0916 23:56:38.736242  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem (1078 bytes)
	I0916 23:56:38.736266  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem (1123 bytes)
	I0916 23:56:38.736284  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem (1675 bytes)
	I0916 23:56:38.736322  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem (1708 bytes)
	I0916 23:56:38.736347  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> /usr/share/ca-certificates/7527072.pem
	I0916 23:56:38.736360  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:56:38.736372  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem -> /usr/share/ca-certificates/752707.pem
	I0916 23:56:38.736905  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 23:56:38.762142  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 23:56:38.786590  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 23:56:38.810694  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0916 23:56:38.834521  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0916 23:56:38.858677  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 23:56:38.881975  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 23:56:38.906146  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 23:56:38.929698  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem --> /usr/share/ca-certificates/7527072.pem (1708 bytes)
	I0916 23:56:38.955154  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 23:56:38.978551  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem --> /usr/share/ca-certificates/752707.pem (1338 bytes)
	I0916 23:56:39.001782  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 23:56:39.019405  804231 ssh_runner.go:195] Run: openssl version
	I0916 23:56:39.024868  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/752707.pem && ln -fs /usr/share/ca-certificates/752707.pem /etc/ssl/certs/752707.pem"
	I0916 23:56:39.034165  804231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/752707.pem
	I0916 23:56:39.038348  804231 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 23:54 /usr/share/ca-certificates/752707.pem
	I0916 23:56:39.038407  804231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/752707.pem
	I0916 23:56:39.045172  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/752707.pem /etc/ssl/certs/51391683.0"
	I0916 23:56:39.054735  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7527072.pem && ln -fs /usr/share/ca-certificates/7527072.pem /etc/ssl/certs/7527072.pem"
	I0916 23:56:39.065180  804231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7527072.pem
	I0916 23:56:39.068976  804231 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 23:54 /usr/share/ca-certificates/7527072.pem
	I0916 23:56:39.069038  804231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7527072.pem
	I0916 23:56:39.075920  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7527072.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 23:56:39.085838  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 23:56:39.095394  804231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:56:39.098966  804231 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:56:39.099019  804231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:56:39.105643  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
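The hexadecimal link names created above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject hashes, which is why each ln -fs is preceded by an openssl x509 -hash call. A small illustration, assuming the CA has already been copied to the path used in this run:

	# Prints the subject hash that OpenSSL-style trust stores expect as the symlink name.
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints b5213941, matching the /etc/ssl/certs/b5213941.0 link created above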
	I0916 23:56:39.114800  804231 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 23:56:39.117988  804231 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 23:56:39.118033  804231 kubeadm.go:392] StartCluster: {Name:ha-472903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 23:56:39.118097  804231 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0916 23:56:39.118132  804231 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 23:56:39.154291  804231 cri.go:89] found id: ""
	I0916 23:56:39.154361  804231 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 23:56:39.163485  804231 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 23:56:39.172454  804231 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0916 23:56:39.172499  804231 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 23:56:39.181066  804231 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 23:56:39.181098  804231 kubeadm.go:157] found existing configuration files:
	
	I0916 23:56:39.181131  804231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 23:56:39.189824  804231 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 23:56:39.189873  804231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 23:56:39.198165  804231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 23:56:39.206772  804231 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 23:56:39.206819  804231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 23:56:39.215119  804231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 23:56:39.223660  804231 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 23:56:39.223717  804231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 23:56:39.232099  804231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 23:56:39.240514  804231 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 23:56:39.240559  804231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 23:56:39.248850  804231 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0916 23:56:39.285897  804231 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0916 23:56:39.285950  804231 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 23:56:39.300660  804231 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0916 23:56:39.300727  804231 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1037-gcp
	I0916 23:56:39.300801  804231 kubeadm.go:310] OS: Linux
	I0916 23:56:39.300901  804231 kubeadm.go:310] CGROUPS_CPU: enabled
	I0916 23:56:39.300975  804231 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0916 23:56:39.301037  804231 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0916 23:56:39.301080  804231 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0916 23:56:39.301127  804231 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0916 23:56:39.301169  804231 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0916 23:56:39.301211  804231 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0916 23:56:39.301268  804231 kubeadm.go:310] CGROUPS_IO: enabled
	I0916 23:56:39.351787  804231 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 23:56:39.351909  804231 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 23:56:39.351995  804231 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 23:56:39.358062  804231 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 23:56:39.360794  804231 out.go:252]   - Generating certificates and keys ...
	I0916 23:56:39.360906  804231 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 23:56:39.360984  804231 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 23:56:39.805287  804231 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 23:56:40.002708  804231 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 23:56:40.279763  804231 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 23:56:40.813028  804231 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 23:56:41.074848  804231 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 23:56:41.075343  804231 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-472903 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0916 23:56:41.124880  804231 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 23:56:41.125041  804231 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-472903 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0916 23:56:41.707716  804231 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 23:56:42.089212  804231 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 23:56:42.627038  804231 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 23:56:42.627119  804231 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 23:56:42.823901  804231 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 23:56:43.022989  804231 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 23:56:43.163778  804231 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 23:56:43.708743  804231 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 23:56:44.024642  804231 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 23:56:44.025130  804231 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 23:56:44.027319  804231 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 23:56:44.029599  804231 out.go:252]   - Booting up control plane ...
	I0916 23:56:44.029737  804231 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 23:56:44.029842  804231 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 23:56:44.030181  804231 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 23:56:44.039957  804231 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 23:56:44.040118  804231 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0916 23:56:44.047794  804231 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0916 23:56:44.048177  804231 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 23:56:44.048269  804231 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 23:56:44.122629  804231 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 23:56:44.122739  804231 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 23:56:45.124352  804231 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001822735s
	I0916 23:56:45.127338  804231 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0916 23:56:45.127477  804231 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0916 23:56:45.127582  804231 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0916 23:56:45.127694  804231 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0916 23:56:47.478256  804231 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.350892202s
	I0916 23:56:47.717698  804231 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.590223043s
	I0916 23:56:49.129161  804231 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 4.001748341s
	I0916 23:56:49.140036  804231 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 23:56:49.148779  804231 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 23:56:49.158010  804231 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 23:56:49.158279  804231 kubeadm.go:310] [mark-control-plane] Marking the node ha-472903 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 23:56:49.165085  804231 kubeadm.go:310] [bootstrap-token] Using token: 4apri1.yqe8ok7wc4ltba21
	I0916 23:56:49.166180  804231 out.go:252]   - Configuring RBAC rules ...
	I0916 23:56:49.166328  804231 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 23:56:49.169225  804231 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 23:56:49.174527  804231 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 23:56:49.176741  804231 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 23:56:49.178892  804231 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 23:56:49.181107  804231 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 23:56:49.534440  804231 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 23:56:49.948567  804231 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 23:56:50.534581  804231 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 23:56:50.535429  804231 kubeadm.go:310] 
	I0916 23:56:50.535529  804231 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 23:56:50.535542  804231 kubeadm.go:310] 
	I0916 23:56:50.535650  804231 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 23:56:50.535660  804231 kubeadm.go:310] 
	I0916 23:56:50.535696  804231 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 23:56:50.535801  804231 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 23:56:50.535858  804231 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 23:56:50.535872  804231 kubeadm.go:310] 
	I0916 23:56:50.535940  804231 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 23:56:50.535949  804231 kubeadm.go:310] 
	I0916 23:56:50.536027  804231 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 23:56:50.536037  804231 kubeadm.go:310] 
	I0916 23:56:50.536125  804231 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 23:56:50.536212  804231 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 23:56:50.536280  804231 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 23:56:50.536286  804231 kubeadm.go:310] 
	I0916 23:56:50.536356  804231 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 23:56:50.536441  804231 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 23:56:50.536448  804231 kubeadm.go:310] 
	I0916 23:56:50.536543  804231 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 4apri1.yqe8ok7wc4ltba21 \
	I0916 23:56:50.536688  804231 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:52c78ec9ad9a2dc0941e43ce337b864c76ea573e452bc75ed737e69ad76deac1 \
	I0916 23:56:50.536722  804231 kubeadm.go:310] 	--control-plane 
	I0916 23:56:50.536731  804231 kubeadm.go:310] 
	I0916 23:56:50.536842  804231 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 23:56:50.536857  804231 kubeadm.go:310] 
	I0916 23:56:50.536947  804231 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 4apri1.yqe8ok7wc4ltba21 \
	I0916 23:56:50.537084  804231 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:52c78ec9ad9a2dc0941e43ce337b864c76ea573e452bc75ed737e69ad76deac1 
	I0916 23:56:50.539097  804231 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1037-gcp\n", err: exit status 1
	I0916 23:56:50.539238  804231 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 23:56:50.539264  804231 cni.go:84] Creating CNI manager for ""
	I0916 23:56:50.539274  804231 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0916 23:56:50.540523  804231 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0916 23:56:50.541480  804231 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 23:56:50.545518  804231 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0916 23:56:50.545534  804231 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 23:56:50.563251  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0916 23:56:50.762002  804231 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 23:56:50.762092  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:56:50.762127  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-472903 minikube.k8s.io/updated_at=2025_09_16T23_56_50_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a minikube.k8s.io/name=ha-472903 minikube.k8s.io/primary=true
	I0916 23:56:50.771679  804231 ops.go:34] apiserver oom_adj: -16
	I0916 23:56:50.843646  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:56:51.344428  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:56:51.844440  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:56:52.344316  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:56:52.844594  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:56:53.343854  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:56:53.844615  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:56:54.344057  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:56:54.844066  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:56:55.344374  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:56:55.844478  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:56:55.927027  804231 kubeadm.go:1105] duration metric: took 5.165002596s to wait for elevateKubeSystemPrivileges
	I0916 23:56:55.927062  804231 kubeadm.go:394] duration metric: took 16.809033965s to StartCluster
	I0916 23:56:55.927081  804231 settings.go:142] acquiring lock: {Name:mk6c1a5bee23e141aad5180323c16c47ed580ab8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:56:55.927146  804231 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21550-749120/kubeconfig
	I0916 23:56:55.927785  804231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/kubeconfig: {Name:mk937123a8fee18625833b0bd778c4556f6787be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:56:55.928026  804231 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 23:56:55.928018  804231 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 23:56:55.928038  804231 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 23:56:55.928103  804231 start.go:241] waiting for startup goroutines ...
	I0916 23:56:55.928121  804231 addons.go:69] Setting default-storageclass=true in profile "ha-472903"
	I0916 23:56:55.928148  804231 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-472903"
	I0916 23:56:55.928126  804231 addons.go:69] Setting storage-provisioner=true in profile "ha-472903"
	I0916 23:56:55.928222  804231 addons.go:238] Setting addon storage-provisioner=true in "ha-472903"
	I0916 23:56:55.928269  804231 host.go:66] Checking if "ha-472903" exists ...
	I0916 23:56:55.928296  804231 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0916 23:56:55.928610  804231 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0916 23:56:55.928740  804231 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0916 23:56:55.954806  804231 kapi.go:59] client config for ha-472903: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key", CAFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 23:56:55.955519  804231 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0916 23:56:55.955545  804231 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0916 23:56:55.955543  804231 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0916 23:56:55.955553  804231 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0916 23:56:55.955611  804231 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0916 23:56:55.955620  804231 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0916 23:56:55.956096  804231 addons.go:238] Setting addon default-storageclass=true in "ha-472903"
	I0916 23:56:55.956145  804231 host.go:66] Checking if "ha-472903" exists ...
	I0916 23:56:55.956685  804231 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0916 23:56:55.957279  804231 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 23:56:55.961536  804231 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 23:56:55.961557  804231 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 23:56:55.961614  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:56:55.979896  804231 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 23:56:55.979925  804231 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 23:56:55.979985  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:56:55.982838  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0916 23:56:55.999402  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0916 23:56:56.011618  804231 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 23:56:56.095355  804231 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 23:56:56.110814  804231 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 23:56:56.153646  804231 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0916 23:56:56.360175  804231 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0916 23:56:56.361116  804231 addons.go:514] duration metric: took 433.076562ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0916 23:56:56.361149  804231 start.go:246] waiting for cluster config update ...
	I0916 23:56:56.361163  804231 start.go:255] writing updated cluster config ...
	I0916 23:56:56.362407  804231 out.go:203] 
	I0916 23:56:56.363527  804231 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0916 23:56:56.363621  804231 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0916 23:56:56.364993  804231 out.go:179] * Starting "ha-472903-m02" control-plane node in "ha-472903" cluster
	I0916 23:56:56.365873  804231 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0916 23:56:56.366751  804231 out.go:179] * Pulling base image v0.0.48 ...
	I0916 23:56:56.367539  804231 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0916 23:56:56.367556  804231 cache.go:58] Caching tarball of preloaded images
	I0916 23:56:56.367630  804231 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0916 23:56:56.367646  804231 preload.go:172] Found /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 23:56:56.367654  804231 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0916 23:56:56.367711  804231 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0916 23:56:56.386547  804231 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0916 23:56:56.386565  804231 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0916 23:56:56.386580  804231 cache.go:232] Successfully downloaded all kic artifacts
	I0916 23:56:56.386607  804231 start.go:360] acquireMachinesLock for ha-472903-m02: {Name:mk81d8c73856cf84ceff1767a1681f3f3cdab773 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 23:56:56.386700  804231 start.go:364] duration metric: took 70.184µs to acquireMachinesLock for "ha-472903-m02"
	I0916 23:56:56.386738  804231 start.go:93] Provisioning new machine with config: &{Name:ha-472903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 23:56:56.386824  804231 start.go:125] createHost starting for "m02" (driver="docker")
	I0916 23:56:56.388402  804231 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0916 23:56:56.388536  804231 start.go:159] libmachine.API.Create for "ha-472903" (driver="docker")
	I0916 23:56:56.388563  804231 client.go:168] LocalClient.Create starting
	I0916 23:56:56.388626  804231 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem
	I0916 23:56:56.388664  804231 main.go:141] libmachine: Decoding PEM data...
	I0916 23:56:56.388687  804231 main.go:141] libmachine: Parsing certificate...
	I0916 23:56:56.388757  804231 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem
	I0916 23:56:56.388789  804231 main.go:141] libmachine: Decoding PEM data...
	I0916 23:56:56.388804  804231 main.go:141] libmachine: Parsing certificate...
	I0916 23:56:56.389042  804231 cli_runner.go:164] Run: docker network inspect ha-472903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:56:56.404624  804231 network_create.go:77] Found existing network {name:ha-472903 subnet:0xc001d2d140 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0916 23:56:56.404653  804231 kic.go:121] calculated static IP "192.168.49.3" for the "ha-472903-m02" container
	I0916 23:56:56.404719  804231 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 23:56:56.420231  804231 cli_runner.go:164] Run: docker volume create ha-472903-m02 --label name.minikube.sigs.k8s.io=ha-472903-m02 --label created_by.minikube.sigs.k8s.io=true
	I0916 23:56:56.436361  804231 oci.go:103] Successfully created a docker volume ha-472903-m02
	I0916 23:56:56.436430  804231 cli_runner.go:164] Run: docker run --rm --name ha-472903-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-472903-m02 --entrypoint /usr/bin/test -v ha-472903-m02:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0916 23:56:56.943375  804231 oci.go:107] Successfully prepared a docker volume ha-472903-m02
	I0916 23:56:56.943427  804231 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0916 23:56:56.943455  804231 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 23:56:56.943528  804231 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-472903-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 23:57:01.091161  804231 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-472903-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.147592491s)
	I0916 23:57:01.091197  804231 kic.go:203] duration metric: took 4.147738136s to extract preloaded images to volume ...
	W0916 23:57:01.091312  804231 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0916 23:57:01.091355  804231 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0916 23:57:01.091403  804231 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 23:57:01.142900  804231 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-472903-m02 --name ha-472903-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-472903-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-472903-m02 --network ha-472903 --ip 192.168.49.3 --volume ha-472903-m02:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0916 23:57:01.378924  804231 cli_runner.go:164] Run: docker container inspect ha-472903-m02 --format={{.State.Running}}
	I0916 23:57:01.396232  804231 cli_runner.go:164] Run: docker container inspect ha-472903-m02 --format={{.State.Status}}
	I0916 23:57:01.412927  804231 cli_runner.go:164] Run: docker exec ha-472903-m02 stat /var/lib/dpkg/alternatives/iptables
	I0916 23:57:01.469205  804231 oci.go:144] the created container "ha-472903-m02" has a running status.
	I0916 23:57:01.469235  804231 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa...
	I0916 23:57:01.517570  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0916 23:57:01.517621  804231 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 23:57:01.540818  804231 cli_runner.go:164] Run: docker container inspect ha-472903-m02 --format={{.State.Status}}
	I0916 23:57:01.560831  804231 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 23:57:01.560858  804231 kic_runner.go:114] Args: [docker exec --privileged ha-472903-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 23:57:01.615037  804231 cli_runner.go:164] Run: docker container inspect ha-472903-m02 --format={{.State.Status}}
	I0916 23:57:01.637921  804231 machine.go:93] provisionDockerMachine start ...
	I0916 23:57:01.638030  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0916 23:57:01.659741  804231 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:01.660056  804231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33549 <nil> <nil>}
	I0916 23:57:01.660078  804231 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 23:57:01.800716  804231 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-472903-m02
	
	I0916 23:57:01.800749  804231 ubuntu.go:182] provisioning hostname "ha-472903-m02"
	I0916 23:57:01.800817  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0916 23:57:01.819791  804231 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:01.820013  804231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33549 <nil> <nil>}
	I0916 23:57:01.820030  804231 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-472903-m02 && echo "ha-472903-m02" | sudo tee /etc/hostname
	I0916 23:57:01.967539  804231 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-472903-m02
	
	I0916 23:57:01.967631  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0916 23:57:01.987814  804231 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:01.988031  804231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33549 <nil> <nil>}
	I0916 23:57:01.988047  804231 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-472903-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-472903-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-472903-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 23:57:02.121536  804231 main.go:141] libmachine: SSH cmd err, output: <nil>: 
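
provisionDockerMachine drives the node over SSH on the host port Docker published for the container's 22/tcp (33549 in this run), authenticating with the generated id_rsa. A rough client sketch assuming golang.org/x/crypto/ssh and a placeholder key path; the real logic lives in minikube's libmachine layer, so treat this as an illustration only:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Placeholder; the log uses .minikube/machines/ha-472903-m02/id_rsa.
	key, err := os.ReadFile("/path/to/machines/node/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test node, host key not pinned
	}
	// 33549 is the host port mapped to the container's 22/tcp in this run.
	client, err := ssh.Dial("tcp", "127.0.0.1:33549", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	// Same command the log runs to set the node hostname.
	out, err := sess.CombinedOutput(`sudo hostname ha-472903-m02 && echo "ha-472903-m02" | sudo tee /etc/hostname`)
	fmt.Printf("err=%v out=%s\n", err, out)
}
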
	I0916 23:57:02.121571  804231 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-749120/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-749120/.minikube}
	I0916 23:57:02.121588  804231 ubuntu.go:190] setting up certificates
	I0916 23:57:02.121602  804231 provision.go:84] configureAuth start
	I0916 23:57:02.121663  804231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m02
	I0916 23:57:02.139056  804231 provision.go:143] copyHostCerts
	I0916 23:57:02.139098  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem
	I0916 23:57:02.139135  804231 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem, removing ...
	I0916 23:57:02.139147  804231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem
	I0916 23:57:02.139221  804231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem (1078 bytes)
	I0916 23:57:02.139329  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem
	I0916 23:57:02.139362  804231 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem, removing ...
	I0916 23:57:02.139372  804231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem
	I0916 23:57:02.139430  804231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem (1123 bytes)
	I0916 23:57:02.139521  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem
	I0916 23:57:02.139549  804231 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem, removing ...
	I0916 23:57:02.139559  804231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem
	I0916 23:57:02.139599  804231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem (1675 bytes)
	I0916 23:57:02.139690  804231 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem org=jenkins.ha-472903-m02 san=[127.0.0.1 192.168.49.3 ha-472903-m02 localhost minikube]
	I0916 23:57:02.262354  804231 provision.go:177] copyRemoteCerts
	I0916 23:57:02.262428  804231 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 23:57:02.262491  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0916 23:57:02.279792  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33549 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa Username:docker}
	I0916 23:57:02.375833  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 23:57:02.375903  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 23:57:02.400316  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 23:57:02.400373  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 23:57:02.422506  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 23:57:02.422550  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 23:57:02.445091  804231 provision.go:87] duration metric: took 323.464176ms to configureAuth
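
configureAuth generates a server certificate whose SANs cover the node's names and addresses (127.0.0.1, 192.168.49.3, ha-472903-m02, localhost, minikube above). A condensed crypto/x509 sketch of that idea; for brevity it self-signs instead of chaining to the minikube CA, so it is an illustration of the SAN handling rather than minikube's actual implementation:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-472903-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the ones logged for this node.
		DNSNames:    []string{"ha-472903-m02", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.3")},
	}
	// Self-signed here; minikube signs with its CA key and ca.pem instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
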
	I0916 23:57:02.445121  804231 ubuntu.go:206] setting minikube options for container-runtime
	I0916 23:57:02.445295  804231 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0916 23:57:02.445313  804231 machine.go:96] duration metric: took 807.372883ms to provisionDockerMachine
	I0916 23:57:02.445320  804231 client.go:171] duration metric: took 6.056751196s to LocalClient.Create
	I0916 23:57:02.445337  804231 start.go:167] duration metric: took 6.056804276s to libmachine.API.Create "ha-472903"
	I0916 23:57:02.445346  804231 start.go:293] postStartSetup for "ha-472903-m02" (driver="docker")
	I0916 23:57:02.445354  804231 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 23:57:02.445402  804231 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 23:57:02.445461  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0916 23:57:02.463550  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33549 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa Username:docker}
	I0916 23:57:02.559528  804231 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 23:57:02.562755  804231 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 23:57:02.562780  804231 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 23:57:02.562787  804231 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 23:57:02.562793  804231 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0916 23:57:02.562803  804231 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-749120/.minikube/addons for local assets ...
	I0916 23:57:02.562847  804231 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-749120/.minikube/files for local assets ...
	I0916 23:57:02.562920  804231 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> 7527072.pem in /etc/ssl/certs
	I0916 23:57:02.562930  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> /etc/ssl/certs/7527072.pem
	I0916 23:57:02.563018  804231 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 23:57:02.571142  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem --> /etc/ssl/certs/7527072.pem (1708 bytes)
	I0916 23:57:02.596466  804231 start.go:296] duration metric: took 151.106324ms for postStartSetup
	I0916 23:57:02.596768  804231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m02
	I0916 23:57:02.613316  804231 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0916 23:57:02.613561  804231 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 23:57:02.613601  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0916 23:57:02.632056  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33549 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa Username:docker}
	I0916 23:57:02.723085  804231 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 23:57:02.727430  804231 start.go:128] duration metric: took 6.340577447s to createHost
	I0916 23:57:02.727453  804231 start.go:83] releasing machines lock for "ha-472903-m02", held for 6.34073897s
	I0916 23:57:02.727519  804231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m02
	I0916 23:57:02.746152  804231 out.go:179] * Found network options:
	I0916 23:57:02.747248  804231 out.go:179]   - NO_PROXY=192.168.49.2
	W0916 23:57:02.748187  804231 proxy.go:120] fail to check proxy env: Error ip not in block
	W0916 23:57:02.748240  804231 proxy.go:120] fail to check proxy env: Error ip not in block
	I0916 23:57:02.748311  804231 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 23:57:02.748360  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0916 23:57:02.748367  804231 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 23:57:02.748427  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0916 23:57:02.765286  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33549 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa Username:docker}
	I0916 23:57:02.766625  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33549 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa Username:docker}
	I0916 23:57:02.856922  804231 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 23:57:02.936692  804231 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 23:57:02.936761  804231 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 23:57:02.961822  804231 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0916 23:57:02.961845  804231 start.go:495] detecting cgroup driver to use...
	I0916 23:57:02.961878  804231 detect.go:190] detected "systemd" cgroup driver on host os
	I0916 23:57:02.961919  804231 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 23:57:02.973318  804231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 23:57:02.983927  804231 docker.go:218] disabling cri-docker service (if available) ...
	I0916 23:57:02.983969  804231 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 23:57:02.996091  804231 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 23:57:03.009314  804231 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 23:57:03.072565  804231 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 23:57:03.140469  804231 docker.go:234] disabling docker service ...
	I0916 23:57:03.140526  804231 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 23:57:03.157179  804231 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 23:57:03.167955  804231 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 23:57:03.233386  804231 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 23:57:03.296537  804231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 23:57:03.307574  804231 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 23:57:03.323754  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0916 23:57:03.334305  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 23:57:03.343767  804231 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0916 23:57:03.343826  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0916 23:57:03.353029  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 23:57:03.361991  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 23:57:03.371206  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 23:57:03.380598  804231 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 23:57:03.389216  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 23:57:03.398125  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 23:57:03.407145  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 23:57:03.416183  804231 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 23:57:03.424123  804231 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 23:57:03.432185  804231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:03.493561  804231 ssh_runner.go:195] Run: sudo systemctl restart containerd
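
The run of sed commands above rewrites /etc/containerd/config.toml in place: pinning sandbox_image, switching the runtime to io.containerd.runc.v2, and forcing SystemdCgroup = true to match the systemd cgroup driver detected on the host, followed by a containerd restart. A tiny Go sketch of one of those edits applied to an in-memory string (sample input, not the file from this run):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Sample fragment; the real file is /etc/containerd/config.toml on the node.
	conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = false
`
	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = true"))
}
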
	I0916 23:57:03.591942  804231 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0916 23:57:03.592010  804231 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0916 23:57:03.595710  804231 start.go:563] Will wait 60s for crictl version
	I0916 23:57:03.595768  804231 ssh_runner.go:195] Run: which crictl
	I0916 23:57:03.599108  804231 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 23:57:03.633181  804231 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
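
After restarting containerd, the start logic waits up to 60s for /run/containerd/containerd.sock and then for crictl to report a version (0.1.0 / containerd 1.7.27 above). A simple polling sketch for the socket wait, assuming it runs on the node itself rather than over SSH as minikube does:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const sock = "/run/containerd/containerd.sock"
	deadline := time.Now().Add(60 * time.Second)
	for {
		// A successful unix-socket dial means containerd is accepting connections.
		conn, err := net.DialTimeout("unix", sock, time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("containerd socket is up")
			return
		}
		if time.Now().After(deadline) {
			fmt.Println("gave up waiting for", sock, ":", err)
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}
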
	I0916 23:57:03.633231  804231 ssh_runner.go:195] Run: containerd --version
	I0916 23:57:03.656364  804231 ssh_runner.go:195] Run: containerd --version
	I0916 23:57:03.680150  804231 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0916 23:57:03.681177  804231 out.go:179]   - env NO_PROXY=192.168.49.2
	I0916 23:57:03.682053  804231 cli_runner.go:164] Run: docker network inspect ha-472903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:57:03.699720  804231 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 23:57:03.703306  804231 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
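
host.minikube.internal is kept in the node's /etc/hosts by filtering out any existing entry and appending a fresh one, as the bash one-liner above does. A sketch of the same idempotent rewrite in Go, writing the result to a scratch path for illustration instead of copying it back over /etc/hosts with sudo:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const entry = "192.168.49.1\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// Drop any previous host.minikube.internal line, like the grep -v above.
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	out := strings.TrimRight(strings.Join(kept, "\n"), "\n") + "\n" + entry + "\n"
	if err := os.WriteFile("/tmp/hosts.rewritten", []byte(out), 0644); err != nil {
		panic(err)
	}
	fmt.Println("wrote /tmp/hosts.rewritten")
}
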
	I0916 23:57:03.714275  804231 mustload.go:65] Loading cluster: ha-472903
	I0916 23:57:03.714452  804231 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0916 23:57:03.714650  804231 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0916 23:57:03.730631  804231 host.go:66] Checking if "ha-472903" exists ...
	I0916 23:57:03.730849  804231 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903 for IP: 192.168.49.3
	I0916 23:57:03.730859  804231 certs.go:194] generating shared ca certs ...
	I0916 23:57:03.730877  804231 certs.go:226] acquiring lock for ca certs: {Name:mk87d179b4a631193bd9c86db8034ccf19400cde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:03.730987  804231 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key
	I0916 23:57:03.731023  804231 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key
	I0916 23:57:03.731032  804231 certs.go:256] generating profile certs ...
	I0916 23:57:03.731092  804231 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key
	I0916 23:57:03.731114  804231 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.d67fba4a
	I0916 23:57:03.731125  804231 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.d67fba4a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I0916 23:57:03.830248  804231 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.d67fba4a ...
	I0916 23:57:03.830275  804231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.d67fba4a: {Name:mk3e97859392ca0d50685e4c31c19acd3c590753 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:03.830438  804231 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.d67fba4a ...
	I0916 23:57:03.830453  804231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.d67fba4a: {Name:mkd3ec6288ef831df369d4ec39839c410f5116ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:03.830530  804231 certs.go:381] copying /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.d67fba4a -> /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt
	I0916 23:57:03.830653  804231 certs.go:385] copying /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.d67fba4a -> /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key
	I0916 23:57:03.830779  804231 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key
	I0916 23:57:03.830794  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 23:57:03.830809  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 23:57:03.830823  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 23:57:03.830836  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 23:57:03.830846  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 23:57:03.830855  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 23:57:03.830864  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 23:57:03.830873  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 23:57:03.830920  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem (1338 bytes)
	W0916 23:57:03.830952  804231 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707_empty.pem, impossibly tiny 0 bytes
	I0916 23:57:03.830962  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem (1675 bytes)
	I0916 23:57:03.830981  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem (1078 bytes)
	I0916 23:57:03.831001  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem (1123 bytes)
	I0916 23:57:03.831021  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem (1675 bytes)
	I0916 23:57:03.831058  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem (1708 bytes)
	I0916 23:57:03.831081  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem -> /usr/share/ca-certificates/752707.pem
	I0916 23:57:03.831094  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> /usr/share/ca-certificates/7527072.pem
	I0916 23:57:03.831107  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:03.831156  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:57:03.847964  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0916 23:57:03.934599  804231 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0916 23:57:03.938331  804231 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0916 23:57:03.950286  804231 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0916 23:57:03.953541  804231 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0916 23:57:03.965169  804231 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0916 23:57:03.968351  804231 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0916 23:57:03.979814  804231 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0916 23:57:03.982969  804231 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0916 23:57:03.993972  804231 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0916 23:57:03.997171  804231 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0916 23:57:04.008607  804231 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0916 23:57:04.011687  804231 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0916 23:57:04.023019  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 23:57:04.046509  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 23:57:04.069781  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 23:57:04.092702  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0916 23:57:04.114933  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0916 23:57:04.137173  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1671 bytes)
	I0916 23:57:04.159280  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 23:57:04.181367  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 23:57:04.203980  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem --> /usr/share/ca-certificates/752707.pem (1338 bytes)
	I0916 23:57:04.230248  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem --> /usr/share/ca-certificates/7527072.pem (1708 bytes)
	I0916 23:57:04.253628  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 23:57:04.276223  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0916 23:57:04.293552  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0916 23:57:04.309978  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0916 23:57:04.326237  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0916 23:57:04.342704  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0916 23:57:04.359099  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0916 23:57:04.375242  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0916 23:57:04.391611  804231 ssh_runner.go:195] Run: openssl version
	I0916 23:57:04.396637  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/752707.pem && ln -fs /usr/share/ca-certificates/752707.pem /etc/ssl/certs/752707.pem"
	I0916 23:57:04.405389  804231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/752707.pem
	I0916 23:57:04.408604  804231 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 23:54 /usr/share/ca-certificates/752707.pem
	I0916 23:57:04.408651  804231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/752707.pem
	I0916 23:57:04.414862  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/752707.pem /etc/ssl/certs/51391683.0"
	I0916 23:57:04.423583  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7527072.pem && ln -fs /usr/share/ca-certificates/7527072.pem /etc/ssl/certs/7527072.pem"
	I0916 23:57:04.432421  804231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7527072.pem
	I0916 23:57:04.435706  804231 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 23:54 /usr/share/ca-certificates/7527072.pem
	I0916 23:57:04.435752  804231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7527072.pem
	I0916 23:57:04.441863  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7527072.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 23:57:04.450595  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 23:57:04.459588  804231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:04.462866  804231 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:04.462907  804231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:04.469279  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
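
Each CA dropped under /usr/share/ca-certificates is hashed with openssl and symlinked into /etc/ssl/certs/<hash>.0 (b5213941.0 for minikubeCA above) so OpenSSL-based tooling on the node trusts it. A small sketch of that step; the certificate path is a placeholder and the command runs locally rather than through the ssh_runner:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	certPath := "/usr/share/ca-certificates/minikubeCA.pem" // placeholder path
	// Same as the logged `openssl x509 -hash -noout -in ...`.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"
	// Same effect as the `ln -fs` in the log; tolerate an existing link in the sketch.
	if err := os.Symlink(certPath, link); err != nil && !os.IsExist(err) {
		fmt.Println("symlink:", err)
		return
	}
	fmt.Println(link, "->", certPath)
}
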
	I0916 23:57:04.478135  804231 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 23:57:04.481236  804231 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 23:57:04.481288  804231 kubeadm.go:926] updating node {m02 192.168.49.3 8443 v1.34.0 containerd true true} ...
	I0916 23:57:04.481383  804231 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-472903-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 23:57:04.481425  804231 kube-vip.go:115] generating kube-vip config ...
	I0916 23:57:04.481462  804231 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0916 23:57:04.492937  804231 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0916 23:57:04.492999  804231 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0916 23:57:04.493041  804231 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0916 23:57:04.501084  804231 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 23:57:04.501123  804231 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0916 23:57:04.509217  804231 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0916 23:57:04.525587  804231 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 23:57:04.544042  804231 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0916 23:57:04.561542  804231 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0916 23:57:04.564725  804231 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 23:57:04.574819  804231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:04.638378  804231 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 23:57:04.659569  804231 host.go:66] Checking if "ha-472903" exists ...
	I0916 23:57:04.659878  804231 start.go:317] joinCluster: &{Name:ha-472903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 23:57:04.659986  804231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0916 23:57:04.660033  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:57:04.678136  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0916 23:57:04.817608  804231 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 23:57:04.817663  804231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 79akng.11lpa8n1ba4yh5m1 --discovery-token-ca-cert-hash sha256:52c78ec9ad9a2dc0941e43ce337b864c76ea573e452bc75ed737e69ad76deac1 --ignore-preflight-errors=all --cri-socket unix:///run/containerd/containerd.sock --node-name=ha-472903-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443"
	I0916 23:57:23.327384  804231 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 79akng.11lpa8n1ba4yh5m1 --discovery-token-ca-cert-hash sha256:52c78ec9ad9a2dc0941e43ce337b864c76ea573e452bc75ed737e69ad76deac1 --ignore-preflight-errors=all --cri-socket unix:///run/containerd/containerd.sock --node-name=ha-472903-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443": (18.509693377s)
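
The join flow is two commands: `kubeadm token create --print-join-command --ttl=0` on the existing control plane (reached over its SSH port 33544), then the printed command replayed on the new node with the extra --control-plane / --apiserver-advertise-address flags seen above. A schematic Go sketch of that hand-off; runOn is a hypothetical stand-in for minikube's per-node ssh_runner and simply executes locally so the sketch stays self-contained:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runOn stands in for the per-node command runner; minikube runs these over SSH.
func runOn(node, command string) (string, error) {
	out, err := exec.Command("/bin/bash", "-c", command).CombinedOutput()
	return string(out), err
}

func main() {
	// Step 1: ask the existing control plane for a join command.
	joinCmd, err := runOn("ha-472903", "kubeadm token create --print-join-command --ttl=0")
	if err != nil {
		fmt.Println("token create failed:", err)
		return
	}
	// Step 2: replay it on the new node with the control-plane flags the log adds.
	full := strings.TrimSpace(joinCmd) +
		" --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443"
	if out, err := runOn("ha-472903-m02", full); err != nil {
		fmt.Println("join failed:", err, out)
	}
}
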
	I0916 23:57:23.327447  804231 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0916 23:57:23.521334  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-472903-m02 minikube.k8s.io/updated_at=2025_09_16T23_57_23_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a minikube.k8s.io/name=ha-472903 minikube.k8s.io/primary=false
	I0916 23:57:23.592991  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-472903-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0916 23:57:23.664899  804231 start.go:319] duration metric: took 19.005017018s to joinCluster
	I0916 23:57:23.664975  804231 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 23:57:23.665223  804231 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0916 23:57:23.665877  804231 out.go:179] * Verifying Kubernetes components...
	I0916 23:57:23.666680  804231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:23.766393  804231 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 23:57:23.779164  804231 kapi.go:59] client config for ha-472903: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key", CAFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0916 23:57:23.779228  804231 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0916 23:57:23.779511  804231 node_ready.go:35] waiting up to 6m0s for node "ha-472903-m02" to be "Ready" ...
	I0916 23:57:24.283593  804231 node_ready.go:49] node "ha-472903-m02" is "Ready"
	I0916 23:57:24.283628  804231 node_ready.go:38] duration metric: took 504.097895ms for node "ha-472903-m02" to be "Ready" ...
	I0916 23:57:24.283648  804231 api_server.go:52] waiting for apiserver process to appear ...
	I0916 23:57:24.283699  804231 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 23:57:24.295735  804231 api_server.go:72] duration metric: took 630.723924ms to wait for apiserver process to appear ...
	I0916 23:57:24.295758  804231 api_server.go:88] waiting for apiserver healthz status ...
	I0916 23:57:24.295774  804231 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0916 23:57:24.299650  804231 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0916 23:57:24.300537  804231 api_server.go:141] control plane version: v1.34.0
	I0916 23:57:24.300558  804231 api_server.go:131] duration metric: took 4.795429ms to wait for apiserver health ...
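
The healthz probe above is an authenticated HTTPS GET against the apiserver using the profile's client certificate and the cluster CA, the same files referenced in the rest.Config logged a few lines earlier. A minimal sketch with placeholder paths:

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	// Placeholders; the log points at .minikube/profiles/ha-472903/client.{crt,key} and .minikube/ca.crt.
	cert, err := tls.LoadX509KeyPair("/path/to/client.crt", "/path/to/client.key")
	if err != nil {
		panic(err)
	}
	caPEM, err := os.ReadFile("/path/to/ca.crt")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool},
	}}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // expect 200 and "ok", as in the log
}
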
	I0916 23:57:24.300566  804231 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 23:57:24.304572  804231 system_pods.go:59] 19 kube-system pods found
	I0916 23:57:24.304598  804231 system_pods.go:61] "coredns-66bc5c9577-c94hz" [774f1c0f-9759-44c2-957d-5a97670f951b] Running
	I0916 23:57:24.304604  804231 system_pods.go:61] "coredns-66bc5c9577-qn8m7" [1c58205e-e865-42fc-8282-23e3d779ee97] Running
	I0916 23:57:24.304608  804231 system_pods.go:61] "etcd-ha-472903" [e333577b-838c-41c5-ba86-ce3d7de57077] Running
	I0916 23:57:24.304611  804231 system_pods.go:61] "etcd-ha-472903-m02" [8a478117-c53d-4621-aa09-be3c16d386c0] Pending
	I0916 23:57:24.304615  804231 system_pods.go:61] "kindnet-lh7dv" [1da43ca7-9af7-4573-9cdc-fd21b098ca2c] Running
	I0916 23:57:24.304621  804231 system_pods.go:61] "kindnet-mwf8l" [8c9533d3-defe-487b-a9b4-0502fb8f2d2a] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-mwf8l": pod kindnet-mwf8l is being deleted, cannot be assigned to a host)
	I0916 23:57:24.304628  804231 system_pods.go:61] "kindnet-q7c7s" [85db5b30-8ace-4bb0-8886-32b9ca032b2b] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-q7c7s": pod kindnet-q7c7s is already assigned to node "ha-472903-m02")
	I0916 23:57:24.304639  804231 system_pods.go:61] "kube-apiserver-ha-472903" [e2844751-3962-4753-8b63-79c124dd5fd7] Running
	I0916 23:57:24.304643  804231 system_pods.go:61] "kube-apiserver-ha-472903-m02" [6675419c-7693-4970-b73c-8415bcda1684] Pending
	I0916 23:57:24.304646  804231 system_pods.go:61] "kube-controller-manager-ha-472903" [be5cfd0b-a3b9-44cf-8cde-74e9eb89c738] Running
	I0916 23:57:24.304650  804231 system_pods.go:61] "kube-controller-manager-ha-472903-m02" [54f6e7e0-0a78-4651-b24f-f902c6bf7efb] Pending
	I0916 23:57:24.304657  804231 system_pods.go:61] "kube-proxy-58lkb" [32fed88c-ce9e-4536-8e96-04ab5b4f5d42] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-58lkb": pod kube-proxy-58lkb is already assigned to node "ha-472903-m02")
	I0916 23:57:24.304662  804231 system_pods.go:61] "kube-proxy-d4m8f" [d4a70eec-48a7-4ea6-871a-1b5ed2beca9a] Running
	I0916 23:57:24.304666  804231 system_pods.go:61] "kube-proxy-mf26q" [34502b32-75c1-4078-abd2-4e4d625252d8] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-mf26q": pod kube-proxy-mf26q is already assigned to node "ha-472903-m02")
	I0916 23:57:24.304670  804231 system_pods.go:61] "kube-scheduler-ha-472903" [e949de65-b218-45cb-abe7-79b704aae473] Running
	I0916 23:57:24.304677  804231 system_pods.go:61] "kube-scheduler-ha-472903-m02" [08b5a4f0-3aa6-4a82-b171-afc1eafcd4c2] Pending
	I0916 23:57:24.304679  804231 system_pods.go:61] "kube-vip-ha-472903" [ccdab212-cf0c-4bf0-958b-173e1008f7bc] Running
	I0916 23:57:24.304682  804231 system_pods.go:61] "kube-vip-ha-472903-m02" [748f096f-bec6-4de8-92f0-128db827bdd6] Pending
	I0916 23:57:24.304687  804231 system_pods.go:61] "storage-provisioner" [ac7f283e-4d28-46cf-a519-bd227237d5e7] Running
	I0916 23:57:24.304694  804231 system_pods.go:74] duration metric: took 4.122792ms to wait for pod list to return data ...
	I0916 23:57:24.304700  804231 default_sa.go:34] waiting for default service account to be created ...
	I0916 23:57:24.307165  804231 default_sa.go:45] found service account: "default"
	I0916 23:57:24.307183  804231 default_sa.go:55] duration metric: took 2.474442ms for default service account to be created ...
	I0916 23:57:24.307190  804231 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 23:57:24.310491  804231 system_pods.go:86] 19 kube-system pods found
	I0916 23:57:24.310512  804231 system_pods.go:89] "coredns-66bc5c9577-c94hz" [774f1c0f-9759-44c2-957d-5a97670f951b] Running
	I0916 23:57:24.310517  804231 system_pods.go:89] "coredns-66bc5c9577-qn8m7" [1c58205e-e865-42fc-8282-23e3d779ee97] Running
	I0916 23:57:24.310520  804231 system_pods.go:89] "etcd-ha-472903" [e333577b-838c-41c5-ba86-ce3d7de57077] Running
	I0916 23:57:24.310524  804231 system_pods.go:89] "etcd-ha-472903-m02" [8a478117-c53d-4621-aa09-be3c16d386c0] Pending
	I0916 23:57:24.310527  804231 system_pods.go:89] "kindnet-lh7dv" [1da43ca7-9af7-4573-9cdc-fd21b098ca2c] Running
	I0916 23:57:24.310532  804231 system_pods.go:89] "kindnet-mwf8l" [8c9533d3-defe-487b-a9b4-0502fb8f2d2a] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-mwf8l": pod kindnet-mwf8l is being deleted, cannot be assigned to a host)
	I0916 23:57:24.310556  804231 system_pods.go:89] "kindnet-q7c7s" [85db5b30-8ace-4bb0-8886-32b9ca032b2b] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-q7c7s": pod kindnet-q7c7s is already assigned to node "ha-472903-m02")
	I0916 23:57:24.310566  804231 system_pods.go:89] "kube-apiserver-ha-472903" [e2844751-3962-4753-8b63-79c124dd5fd7] Running
	I0916 23:57:24.310571  804231 system_pods.go:89] "kube-apiserver-ha-472903-m02" [6675419c-7693-4970-b73c-8415bcda1684] Pending
	I0916 23:57:24.310576  804231 system_pods.go:89] "kube-controller-manager-ha-472903" [be5cfd0b-a3b9-44cf-8cde-74e9eb89c738] Running
	I0916 23:57:24.310580  804231 system_pods.go:89] "kube-controller-manager-ha-472903-m02" [54f6e7e0-0a78-4651-b24f-f902c6bf7efb] Pending
	I0916 23:57:24.310588  804231 system_pods.go:89] "kube-proxy-58lkb" [32fed88c-ce9e-4536-8e96-04ab5b4f5d42] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-58lkb": pod kube-proxy-58lkb is already assigned to node "ha-472903-m02")
	I0916 23:57:24.310591  804231 system_pods.go:89] "kube-proxy-d4m8f" [d4a70eec-48a7-4ea6-871a-1b5ed2beca9a] Running
	I0916 23:57:24.310596  804231 system_pods.go:89] "kube-proxy-mf26q" [34502b32-75c1-4078-abd2-4e4d625252d8] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-mf26q": pod kube-proxy-mf26q is already assigned to node "ha-472903-m02")
	I0916 23:57:24.310600  804231 system_pods.go:89] "kube-scheduler-ha-472903" [e949de65-b218-45cb-abe7-79b704aae473] Running
	I0916 23:57:24.310603  804231 system_pods.go:89] "kube-scheduler-ha-472903-m02" [08b5a4f0-3aa6-4a82-b171-afc1eafcd4c2] Pending
	I0916 23:57:24.310608  804231 system_pods.go:89] "kube-vip-ha-472903" [ccdab212-cf0c-4bf0-958b-173e1008f7bc] Running
	I0916 23:57:24.310611  804231 system_pods.go:89] "kube-vip-ha-472903-m02" [748f096f-bec6-4de8-92f0-128db827bdd6] Pending
	I0916 23:57:24.310614  804231 system_pods.go:89] "storage-provisioner" [ac7f283e-4d28-46cf-a519-bd227237d5e7] Running
	I0916 23:57:24.310621  804231 system_pods.go:126] duration metric: took 3.426124ms to wait for k8s-apps to be running ...
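
The k8s-apps wait is essentially a pod list in kube-system plus a check of each pod's phase; the Pending entries above for the new node's static pods and DaemonSet replicas are expected right after the join. A rough client-go equivalent, assuming a kubeconfig path (the test rewrites the host to the primary endpoint, as logged above):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		// Report anything not yet Running, mirroring the per-pod lines in the log.
		if p.Status.Phase != corev1.PodRunning {
			fmt.Printf("not running yet: %s (%s)\n", p.Name, p.Status.Phase)
		}
	}
}
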
	I0916 23:57:24.310629  804231 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 23:57:24.310666  804231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 23:57:24.322152  804231 system_svc.go:56] duration metric: took 11.515834ms WaitForService to wait for kubelet
	I0916 23:57:24.322176  804231 kubeadm.go:578] duration metric: took 657.167547ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 23:57:24.322199  804231 node_conditions.go:102] verifying NodePressure condition ...
	I0916 23:57:24.327707  804231 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 23:57:24.327734  804231 node_conditions.go:123] node cpu capacity is 8
	I0916 23:57:24.327748  804231 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 23:57:24.327754  804231 node_conditions.go:123] node cpu capacity is 8
	I0916 23:57:24.327759  804231 node_conditions.go:105] duration metric: took 5.554046ms to run NodePressure ...
	I0916 23:57:24.327772  804231 start.go:241] waiting for startup goroutines ...
	I0916 23:57:24.327803  804231 start.go:255] writing updated cluster config ...
	I0916 23:57:24.329316  804231 out.go:203] 
	I0916 23:57:24.330356  804231 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0916 23:57:24.330485  804231 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0916 23:57:24.331956  804231 out.go:179] * Starting "ha-472903-m03" control-plane node in "ha-472903" cluster
	I0916 23:57:24.332973  804231 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0916 23:57:24.333962  804231 out.go:179] * Pulling base image v0.0.48 ...
	I0916 23:57:24.334852  804231 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0916 23:57:24.334875  804231 cache.go:58] Caching tarball of preloaded images
	I0916 23:57:24.334942  804231 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0916 23:57:24.334986  804231 preload.go:172] Found /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 23:57:24.334997  804231 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0916 23:57:24.335117  804231 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0916 23:57:24.357217  804231 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0916 23:57:24.357233  804231 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0916 23:57:24.357242  804231 cache.go:232] Successfully downloaded all kic artifacts
	I0916 23:57:24.357267  804231 start.go:360] acquireMachinesLock for ha-472903-m03: {Name:mk61000bb8e4699ca3310a7fc257e30a156b69de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 23:57:24.357354  804231 start.go:364] duration metric: took 71.354µs to acquireMachinesLock for "ha-472903-m03"
	I0916 23:57:24.357375  804231 start.go:93] Provisioning new machine with config: &{Name:ha-472903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 23:57:24.357498  804231 start.go:125] createHost starting for "m03" (driver="docker")
	I0916 23:57:24.358917  804231 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0916 23:57:24.358994  804231 start.go:159] libmachine.API.Create for "ha-472903" (driver="docker")
	I0916 23:57:24.359023  804231 client.go:168] LocalClient.Create starting
	I0916 23:57:24.359071  804231 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem
	I0916 23:57:24.359103  804231 main.go:141] libmachine: Decoding PEM data...
	I0916 23:57:24.359116  804231 main.go:141] libmachine: Parsing certificate...
	I0916 23:57:24.359164  804231 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem
	I0916 23:57:24.359182  804231 main.go:141] libmachine: Decoding PEM data...
	I0916 23:57:24.359192  804231 main.go:141] libmachine: Parsing certificate...
	I0916 23:57:24.359366  804231 cli_runner.go:164] Run: docker network inspect ha-472903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:57:24.375654  804231 network_create.go:77] Found existing network {name:ha-472903 subnet:0xc001b33bf0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0916 23:57:24.375684  804231 kic.go:121] calculated static IP "192.168.49.4" for the "ha-472903-m03" container
	I0916 23:57:24.375740  804231 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 23:57:24.392165  804231 cli_runner.go:164] Run: docker volume create ha-472903-m03 --label name.minikube.sigs.k8s.io=ha-472903-m03 --label created_by.minikube.sigs.k8s.io=true
	I0916 23:57:24.408273  804231 oci.go:103] Successfully created a docker volume ha-472903-m03
	I0916 23:57:24.408342  804231 cli_runner.go:164] Run: docker run --rm --name ha-472903-m03-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-472903-m03 --entrypoint /usr/bin/test -v ha-472903-m03:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0916 23:57:24.957699  804231 oci.go:107] Successfully prepared a docker volume ha-472903-m03
	I0916 23:57:24.957748  804231 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0916 23:57:24.957783  804231 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 23:57:24.957856  804231 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-472903-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 23:57:29.095091  804231 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-472903-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.13717471s)
	I0916 23:57:29.095123  804231 kic.go:203] duration metric: took 4.137337977s to extract preloaded images to volume ...
	W0916 23:57:29.095214  804231 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0916 23:57:29.095253  804231 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0916 23:57:29.095300  804231 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 23:57:29.145859  804231 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-472903-m03 --name ha-472903-m03 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-472903-m03 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-472903-m03 --network ha-472903 --ip 192.168.49.4 --volume ha-472903-m03:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0916 23:57:29.392873  804231 cli_runner.go:164] Run: docker container inspect ha-472903-m03 --format={{.State.Running}}
	I0916 23:57:29.412389  804231 cli_runner.go:164] Run: docker container inspect ha-472903-m03 --format={{.State.Status}}
	I0916 23:57:29.430593  804231 cli_runner.go:164] Run: docker exec ha-472903-m03 stat /var/lib/dpkg/alternatives/iptables
	I0916 23:57:29.476672  804231 oci.go:144] the created container "ha-472903-m03" has a running status.
	I0916 23:57:29.476707  804231 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa...
	I0916 23:57:29.927926  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0916 23:57:29.927968  804231 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 23:57:29.954518  804231 cli_runner.go:164] Run: docker container inspect ha-472903-m03 --format={{.State.Status}}
	I0916 23:57:29.975503  804231 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 23:57:29.975522  804231 kic_runner.go:114] Args: [docker exec --privileged ha-472903-m03 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 23:57:30.023965  804231 cli_runner.go:164] Run: docker container inspect ha-472903-m03 --format={{.State.Status}}
	I0916 23:57:30.040966  804231 machine.go:93] provisionDockerMachine start ...
	I0916 23:57:30.041051  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0916 23:57:30.058157  804231 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:30.058388  804231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33554 <nil> <nil>}
	I0916 23:57:30.058400  804231 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 23:57:30.190964  804231 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-472903-m03
	
	I0916 23:57:30.190995  804231 ubuntu.go:182] provisioning hostname "ha-472903-m03"
	I0916 23:57:30.191059  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0916 23:57:30.208862  804231 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:30.209123  804231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33554 <nil> <nil>}
	I0916 23:57:30.209144  804231 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-472903-m03 && echo "ha-472903-m03" | sudo tee /etc/hostname
	I0916 23:57:30.354363  804231 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-472903-m03
	
	I0916 23:57:30.354466  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0916 23:57:30.372285  804231 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:30.372570  804231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33554 <nil> <nil>}
	I0916 23:57:30.372590  804231 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-472903-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-472903-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-472903-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 23:57:30.504861  804231 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 23:57:30.504898  804231 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-749120/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-749120/.minikube}
	I0916 23:57:30.504920  804231 ubuntu.go:190] setting up certificates
	I0916 23:57:30.504933  804231 provision.go:84] configureAuth start
	I0916 23:57:30.504996  804231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m03
	I0916 23:57:30.522218  804231 provision.go:143] copyHostCerts
	I0916 23:57:30.522259  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem
	I0916 23:57:30.522297  804231 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem, removing ...
	I0916 23:57:30.522306  804231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem
	I0916 23:57:30.522369  804231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem (1078 bytes)
	I0916 23:57:30.522483  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem
	I0916 23:57:30.522506  804231 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem, removing ...
	I0916 23:57:30.522510  804231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem
	I0916 23:57:30.522547  804231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem (1123 bytes)
	I0916 23:57:30.522650  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem
	I0916 23:57:30.522673  804231 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem, removing ...
	I0916 23:57:30.522678  804231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem
	I0916 23:57:30.522703  804231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem (1675 bytes)
	I0916 23:57:30.522769  804231 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem org=jenkins.ha-472903-m03 san=[127.0.0.1 192.168.49.4 ha-472903-m03 localhost minikube]
	I0916 23:57:30.644066  804231 provision.go:177] copyRemoteCerts
	I0916 23:57:30.644118  804231 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 23:57:30.644153  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0916 23:57:30.661612  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33554 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa Username:docker}
	I0916 23:57:30.757452  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 23:57:30.757504  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 23:57:30.782942  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 23:57:30.782994  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 23:57:30.806508  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 23:57:30.806562  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 23:57:30.829686  804231 provision.go:87] duration metric: took 324.735799ms to configureAuth
	I0916 23:57:30.829709  804231 ubuntu.go:206] setting minikube options for container-runtime
	I0916 23:57:30.829902  804231 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0916 23:57:30.829916  804231 machine.go:96] duration metric: took 788.930334ms to provisionDockerMachine
	I0916 23:57:30.829925  804231 client.go:171] duration metric: took 6.470893656s to LocalClient.Create
	I0916 23:57:30.829958  804231 start.go:167] duration metric: took 6.470963089s to libmachine.API.Create "ha-472903"
	I0916 23:57:30.829971  804231 start.go:293] postStartSetup for "ha-472903-m03" (driver="docker")
	I0916 23:57:30.829982  804231 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 23:57:30.830042  804231 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 23:57:30.830092  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0916 23:57:30.847215  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33554 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa Username:docker}
	I0916 23:57:30.945849  804231 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 23:57:30.949055  804231 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 23:57:30.949086  804231 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 23:57:30.949098  804231 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 23:57:30.949107  804231 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0916 23:57:30.949120  804231 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-749120/.minikube/addons for local assets ...
	I0916 23:57:30.949174  804231 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-749120/.minikube/files for local assets ...
	I0916 23:57:30.949274  804231 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> 7527072.pem in /etc/ssl/certs
	I0916 23:57:30.949286  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> /etc/ssl/certs/7527072.pem
	I0916 23:57:30.949392  804231 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 23:57:30.957998  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem --> /etc/ssl/certs/7527072.pem (1708 bytes)
	I0916 23:57:30.983779  804231 start.go:296] duration metric: took 153.794843ms for postStartSetup
	I0916 23:57:30.984109  804231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m03
	I0916 23:57:31.001367  804231 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0916 23:57:31.001618  804231 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 23:57:31.001659  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0916 23:57:31.019034  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33554 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa Username:docker}
	I0916 23:57:31.110814  804231 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 23:57:31.115046  804231 start.go:128] duration metric: took 6.757532739s to createHost
	I0916 23:57:31.115072  804231 start.go:83] releasing machines lock for "ha-472903-m03", held for 6.757707303s
	I0916 23:57:31.115154  804231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m03
	I0916 23:57:31.133371  804231 out.go:179] * Found network options:
	I0916 23:57:31.134481  804231 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0916 23:57:31.135570  804231 proxy.go:120] fail to check proxy env: Error ip not in block
	W0916 23:57:31.135598  804231 proxy.go:120] fail to check proxy env: Error ip not in block
	W0916 23:57:31.135626  804231 proxy.go:120] fail to check proxy env: Error ip not in block
	W0916 23:57:31.135644  804231 proxy.go:120] fail to check proxy env: Error ip not in block
	I0916 23:57:31.135714  804231 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 23:57:31.135763  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0916 23:57:31.135778  804231 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 23:57:31.135845  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0916 23:57:31.152320  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33554 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa Username:docker}
	I0916 23:57:31.153909  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33554 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa Username:docker}
	I0916 23:57:31.320495  804231 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 23:57:31.348141  804231 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 23:57:31.348214  804231 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 23:57:31.373693  804231 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0916 23:57:31.373720  804231 start.go:495] detecting cgroup driver to use...
	I0916 23:57:31.373748  804231 detect.go:190] detected "systemd" cgroup driver on host os
	I0916 23:57:31.373802  804231 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 23:57:31.385560  804231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 23:57:31.396165  804231 docker.go:218] disabling cri-docker service (if available) ...
	I0916 23:57:31.396214  804231 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 23:57:31.409119  804231 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 23:57:31.422244  804231 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 23:57:31.489491  804231 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 23:57:31.557098  804231 docker.go:234] disabling docker service ...
	I0916 23:57:31.557149  804231 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 23:57:31.574601  804231 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 23:57:31.585773  804231 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 23:57:31.649988  804231 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 23:57:31.717070  804231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 23:57:31.727904  804231 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 23:57:31.743685  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0916 23:57:31.755962  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 23:57:31.766072  804231 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0916 23:57:31.766138  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0916 23:57:31.775522  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 23:57:31.785914  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 23:57:31.795134  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 23:57:31.804565  804231 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 23:57:31.813319  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 23:57:31.822500  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 23:57:31.831597  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 23:57:31.840887  804231 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 23:57:31.848842  804231 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 23:57:31.857026  804231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:31.920521  804231 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0916 23:57:32.022746  804231 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0916 23:57:32.022804  804231 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0916 23:57:32.026838  804231 start.go:563] Will wait 60s for crictl version
	I0916 23:57:32.026888  804231 ssh_runner.go:195] Run: which crictl
	I0916 23:57:32.030295  804231 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 23:57:32.064100  804231 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0916 23:57:32.064158  804231 ssh_runner.go:195] Run: containerd --version
	I0916 23:57:32.088276  804231 ssh_runner.go:195] Run: containerd --version
	I0916 23:57:32.114182  804231 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0916 23:57:32.115194  804231 out.go:179]   - env NO_PROXY=192.168.49.2
	I0916 23:57:32.116236  804231 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0916 23:57:32.117151  804231 cli_runner.go:164] Run: docker network inspect ha-472903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:57:32.133290  804231 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 23:57:32.136901  804231 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 23:57:32.147860  804231 mustload.go:65] Loading cluster: ha-472903
	I0916 23:57:32.148060  804231 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0916 23:57:32.148275  804231 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0916 23:57:32.164278  804231 host.go:66] Checking if "ha-472903" exists ...
	I0916 23:57:32.164570  804231 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903 for IP: 192.168.49.4
	I0916 23:57:32.164584  804231 certs.go:194] generating shared ca certs ...
	I0916 23:57:32.164601  804231 certs.go:226] acquiring lock for ca certs: {Name:mk87d179b4a631193bd9c86db8034ccf19400cde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:32.164751  804231 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key
	I0916 23:57:32.164800  804231 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key
	I0916 23:57:32.164814  804231 certs.go:256] generating profile certs ...
	I0916 23:57:32.164911  804231 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key
	I0916 23:57:32.164940  804231 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.14b885b8
	I0916 23:57:32.164958  804231 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.14b885b8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I0916 23:57:32.342596  804231 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.14b885b8 ...
	I0916 23:57:32.342623  804231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.14b885b8: {Name:mk455c3f0ae4544ddcdf75c25cbd1b87a24e61a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:32.342787  804231 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.14b885b8 ...
	I0916 23:57:32.342799  804231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.14b885b8: {Name:mkbd551bf9ae23c129f7e263550d20b4aac5d095 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:32.342871  804231 certs.go:381] copying /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.14b885b8 -> /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt
	I0916 23:57:32.343007  804231 certs.go:385] copying /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.14b885b8 -> /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key
	I0916 23:57:32.343136  804231 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key
	I0916 23:57:32.343152  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 23:57:32.343165  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 23:57:32.343178  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 23:57:32.343191  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 23:57:32.343204  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 23:57:32.343214  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 23:57:32.343229  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 23:57:32.343247  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 23:57:32.343299  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem (1338 bytes)
	W0916 23:57:32.343327  804231 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707_empty.pem, impossibly tiny 0 bytes
	I0916 23:57:32.343337  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem (1675 bytes)
	I0916 23:57:32.343357  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem (1078 bytes)
	I0916 23:57:32.343379  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem (1123 bytes)
	I0916 23:57:32.343400  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem (1675 bytes)
	I0916 23:57:32.343464  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem (1708 bytes)
	I0916 23:57:32.343501  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:32.343521  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem -> /usr/share/ca-certificates/752707.pem
	I0916 23:57:32.343534  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> /usr/share/ca-certificates/7527072.pem
	I0916 23:57:32.343588  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:57:32.360782  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0916 23:57:32.447595  804231 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0916 23:57:32.451217  804231 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0916 23:57:32.464033  804231 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0916 23:57:32.467273  804231 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0916 23:57:32.478860  804231 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0916 23:57:32.482180  804231 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0916 23:57:32.493717  804231 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0916 23:57:32.496761  804231 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0916 23:57:32.507849  804231 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0916 23:57:32.511054  804231 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0916 23:57:32.523733  804231 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0916 23:57:32.526954  804231 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0916 23:57:32.538314  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 23:57:32.561866  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 23:57:32.585900  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 23:57:32.610048  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0916 23:57:32.634812  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0916 23:57:32.659163  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 23:57:32.682157  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 23:57:32.704663  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 23:57:32.727856  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 23:57:32.752740  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem --> /usr/share/ca-certificates/752707.pem (1338 bytes)
	I0916 23:57:32.775900  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem --> /usr/share/ca-certificates/7527072.pem (1708 bytes)
	I0916 23:57:32.798720  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0916 23:57:32.815542  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0916 23:57:32.832241  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0916 23:57:32.848964  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0916 23:57:32.865780  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0916 23:57:32.882614  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0916 23:57:32.899296  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0916 23:57:32.916516  804231 ssh_runner.go:195] Run: openssl version
	I0916 23:57:32.921611  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7527072.pem && ln -fs /usr/share/ca-certificates/7527072.pem /etc/ssl/certs/7527072.pem"
	I0916 23:57:32.930917  804231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7527072.pem
	I0916 23:57:32.934241  804231 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 23:54 /usr/share/ca-certificates/7527072.pem
	I0916 23:57:32.934283  804231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7527072.pem
	I0916 23:57:32.941354  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7527072.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 23:57:32.950335  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 23:57:32.959292  804231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:32.962576  804231 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:32.962623  804231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:32.968989  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 23:57:32.978331  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/752707.pem && ln -fs /usr/share/ca-certificates/752707.pem /etc/ssl/certs/752707.pem"
	I0916 23:57:32.987188  804231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/752707.pem
	I0916 23:57:32.990463  804231 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 23:54 /usr/share/ca-certificates/752707.pem
	I0916 23:57:32.990497  804231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/752707.pem
	I0916 23:57:32.996813  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/752707.pem /etc/ssl/certs/51391683.0"
	I0916 23:57:33.005924  804231 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 23:57:33.009122  804231 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 23:57:33.009183  804231 kubeadm.go:926] updating node {m03 192.168.49.4 8443 v1.34.0 containerd true true} ...
	I0916 23:57:33.009266  804231 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-472903-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 23:57:33.009291  804231 kube-vip.go:115] generating kube-vip config ...
	I0916 23:57:33.009319  804231 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0916 23:57:33.021189  804231 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0916 23:57:33.021246  804231 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0916 23:57:33.021293  804231 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0916 23:57:33.029533  804231 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 23:57:33.029576  804231 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0916 23:57:33.038861  804231 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0916 23:57:33.056092  804231 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 23:57:33.075506  804231 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0916 23:57:33.093918  804231 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0916 23:57:33.097171  804231 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 23:57:33.107668  804231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:33.167706  804231 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 23:57:33.188453  804231 host.go:66] Checking if "ha-472903" exists ...
	I0916 23:57:33.188671  804231 start.go:317] joinCluster: &{Name:ha-472903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 23:57:33.188781  804231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0916 23:57:33.188819  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:57:33.210165  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0916 23:57:33.351871  804231 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 23:57:33.351930  804231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token uj456s.97hymgg3kmg6owuv --discovery-token-ca-cert-hash sha256:52c78ec9ad9a2dc0941e43ce337b864c76ea573e452bc75ed737e69ad76deac1 --ignore-preflight-errors=all --cri-socket unix:///run/containerd/containerd.sock --node-name=ha-472903-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443"
	I0916 23:57:51.860237  804231 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token uj456s.97hymgg3kmg6owuv --discovery-token-ca-cert-hash sha256:52c78ec9ad9a2dc0941e43ce337b864c76ea573e452bc75ed737e69ad76deac1 --ignore-preflight-errors=all --cri-socket unix:///run/containerd/containerd.sock --node-name=ha-472903-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443": (18.508258539s)
	I0916 23:57:51.860308  804231 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0916 23:57:52.080986  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-472903-m03 minikube.k8s.io/updated_at=2025_09_16T23_57_52_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a minikube.k8s.io/name=ha-472903 minikube.k8s.io/primary=false
	I0916 23:57:52.152525  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-472903-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0916 23:57:52.226560  804231 start.go:319] duration metric: took 19.037884553s to joinCluster
	I0916 23:57:52.226624  804231 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 23:57:52.226912  804231 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0916 23:57:52.227744  804231 out.go:179] * Verifying Kubernetes components...
	I0916 23:57:52.228620  804231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:52.334638  804231 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 23:57:52.349036  804231 kapi.go:59] client config for ha-472903: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key", CAFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0916 23:57:52.349105  804231 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0916 23:57:52.349317  804231 node_ready.go:35] waiting up to 6m0s for node "ha-472903-m03" to be "Ready" ...
	I0916 23:57:54.352346  804231 node_ready.go:49] node "ha-472903-m03" is "Ready"
	I0916 23:57:54.352374  804231 node_ready.go:38] duration metric: took 2.003044453s for node "ha-472903-m03" to be "Ready" ...
	I0916 23:57:54.352389  804231 api_server.go:52] waiting for apiserver process to appear ...
	I0916 23:57:54.352476  804231 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 23:57:54.365259  804231 api_server.go:72] duration metric: took 2.138606454s to wait for apiserver process to appear ...
	I0916 23:57:54.365280  804231 api_server.go:88] waiting for apiserver healthz status ...
	I0916 23:57:54.365298  804231 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0916 23:57:54.370985  804231 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0916 23:57:54.371831  804231 api_server.go:141] control plane version: v1.34.0
	I0916 23:57:54.371850  804231 api_server.go:131] duration metric: took 6.564025ms to wait for apiserver health ...
	I0916 23:57:54.371858  804231 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 23:57:54.376785  804231 system_pods.go:59] 27 kube-system pods found
	I0916 23:57:54.376811  804231 system_pods.go:61] "coredns-66bc5c9577-c94hz" [774f1c0f-9759-44c2-957d-5a97670f951b] Running
	I0916 23:57:54.376815  804231 system_pods.go:61] "coredns-66bc5c9577-qn8m7" [1c58205e-e865-42fc-8282-23e3d779ee97] Running
	I0916 23:57:54.376818  804231 system_pods.go:61] "etcd-ha-472903" [e333577b-838c-41c5-ba86-ce3d7de57077] Running
	I0916 23:57:54.376822  804231 system_pods.go:61] "etcd-ha-472903-m02" [8a478117-c53d-4621-aa09-be3c16d386c0] Running
	I0916 23:57:54.376824  804231 system_pods.go:61] "etcd-ha-472903-m03" [73e10c6a-306a-4c7e-b816-1b8d6b815292] Pending
	I0916 23:57:54.376830  804231 system_pods.go:61] "kindnet-2dqnn" [f5c4164d-0d88-4b7b-bc52-18a7e211fe98] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-2dqnn": pod kindnet-2dqnn is already assigned to node "ha-472903-m03")
	I0916 23:57:54.376833  804231 system_pods.go:61] "kindnet-lh7dv" [1da43ca7-9af7-4573-9cdc-fd21b098ca2c] Running
	I0916 23:57:54.376838  804231 system_pods.go:61] "kindnet-q7c7s" [85db5b30-8ace-4bb0-8886-32b9ca032b2b] Running
	I0916 23:57:54.376842  804231 system_pods.go:61] "kindnet-wwdfr" [e86a6e30-712e-4d39-a235-87489d16c0f3] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-wwdfr": pod kindnet-wwdfr is already assigned to node "ha-472903-m03")
	I0916 23:57:54.376849  804231 system_pods.go:61] "kindnet-x6twd" [f2346479-1adb-4bc7-af07-971525be2b05] Pending: PodScheduled:SchedulerError (pod f2346479-1adb-4bc7-af07-971525be2b05(kube-system/kindnet-x6twd) is in the cache, so can't be assumed)
	I0916 23:57:54.376853  804231 system_pods.go:61] "kube-apiserver-ha-472903" [e2844751-3962-4753-8b63-79c124dd5fd7] Running
	I0916 23:57:54.376858  804231 system_pods.go:61] "kube-apiserver-ha-472903-m02" [6675419c-7693-4970-b73c-8415bcda1684] Running
	I0916 23:57:54.376861  804231 system_pods.go:61] "kube-apiserver-ha-472903-m03" [8c79e747-e193-4471-a4be-ab4d604998ad] Pending
	I0916 23:57:54.376867  804231 system_pods.go:61] "kube-controller-manager-ha-472903" [be5cfd0b-a3b9-44cf-8cde-74e9eb89c738] Running
	I0916 23:57:54.376870  804231 system_pods.go:61] "kube-controller-manager-ha-472903-m02" [54f6e7e0-0a78-4651-b24f-f902c6bf7efb] Running
	I0916 23:57:54.376873  804231 system_pods.go:61] "kube-controller-manager-ha-472903-m03" [d48b1c84-6653-43e8-9322-ae2c64471dde] Pending
	I0916 23:57:54.376876  804231 system_pods.go:61] "kube-proxy-58lkb" [32fed88c-ce9e-4536-8e96-04ab5b4f5d42] Running
	I0916 23:57:54.376881  804231 system_pods.go:61] "kube-proxy-d4m8f" [d4a70eec-48a7-4ea6-871a-1b5ed2beca9a] Running
	I0916 23:57:54.376885  804231 system_pods.go:61] "kube-proxy-kn6nb" [53644856-9fda-4556-bbb5-12254c4b00a3] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-kn6nb": pod kube-proxy-kn6nb is already assigned to node "ha-472903-m03")
	I0916 23:57:54.376889  804231 system_pods.go:61] "kube-proxy-xhlnz" [1967fed1-7529-46d0-accd-ab74751b47fa] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-xhlnz": pod kube-proxy-xhlnz is already assigned to node "ha-472903-m03")
	I0916 23:57:54.376894  804231 system_pods.go:61] "kube-scheduler-ha-472903" [e949de65-b218-45cb-abe7-79b704aae473] Running
	I0916 23:57:54.376897  804231 system_pods.go:61] "kube-scheduler-ha-472903-m02" [08b5a4f0-3aa6-4a82-b171-afc1eafcd4c2] Running
	I0916 23:57:54.376900  804231 system_pods.go:61] "kube-scheduler-ha-472903-m03" [7b954c46-3b8c-47e7-b10c-a20dd936d45c] Pending
	I0916 23:57:54.376904  804231 system_pods.go:61] "kube-vip-ha-472903" [ccdab212-cf0c-4bf0-958b-173e1008f7bc] Running
	I0916 23:57:54.376907  804231 system_pods.go:61] "kube-vip-ha-472903-m02" [748f096f-bec6-4de8-92f0-128db827bdd6] Running
	I0916 23:57:54.376910  804231 system_pods.go:61] "kube-vip-ha-472903-m03" [62b1f237-95c3-4a40-b3b7-a519f7c80ad4] Pending
	I0916 23:57:54.376913  804231 system_pods.go:61] "storage-provisioner" [ac7f283e-4d28-46cf-a519-bd227237d5e7] Running
	I0916 23:57:54.376918  804231 system_pods.go:74] duration metric: took 5.052009ms to wait for pod list to return data ...
	I0916 23:57:54.376925  804231 default_sa.go:34] waiting for default service account to be created ...
	I0916 23:57:54.378969  804231 default_sa.go:45] found service account: "default"
	I0916 23:57:54.378989  804231 default_sa.go:55] duration metric: took 2.056584ms for default service account to be created ...
	I0916 23:57:54.378999  804231 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 23:57:54.383753  804231 system_pods.go:86] 27 kube-system pods found
	I0916 23:57:54.383781  804231 system_pods.go:89] "coredns-66bc5c9577-c94hz" [774f1c0f-9759-44c2-957d-5a97670f951b] Running
	I0916 23:57:54.383790  804231 system_pods.go:89] "coredns-66bc5c9577-qn8m7" [1c58205e-e865-42fc-8282-23e3d779ee97] Running
	I0916 23:57:54.383796  804231 system_pods.go:89] "etcd-ha-472903" [e333577b-838c-41c5-ba86-ce3d7de57077] Running
	I0916 23:57:54.383802  804231 system_pods.go:89] "etcd-ha-472903-m02" [8a478117-c53d-4621-aa09-be3c16d386c0] Running
	I0916 23:57:54.383812  804231 system_pods.go:89] "etcd-ha-472903-m03" [73e10c6a-306a-4c7e-b816-1b8d6b815292] Pending
	I0916 23:57:54.383821  804231 system_pods.go:89] "kindnet-2dqnn" [f5c4164d-0d88-4b7b-bc52-18a7e211fe98] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-2dqnn": pod kindnet-2dqnn is already assigned to node "ha-472903-m03")
	I0916 23:57:54.383829  804231 system_pods.go:89] "kindnet-lh7dv" [1da43ca7-9af7-4573-9cdc-fd21b098ca2c] Running
	I0916 23:57:54.383837  804231 system_pods.go:89] "kindnet-q7c7s" [85db5b30-8ace-4bb0-8886-32b9ca032b2b] Running
	I0916 23:57:54.383842  804231 system_pods.go:89] "kindnet-wwdfr" [e86a6e30-712e-4d39-a235-87489d16c0f3] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-wwdfr": pod kindnet-wwdfr is already assigned to node "ha-472903-m03")
	I0916 23:57:54.383852  804231 system_pods.go:89] "kindnet-x6twd" [f2346479-1adb-4bc7-af07-971525be2b05] Pending: PodScheduled:SchedulerError (pod f2346479-1adb-4bc7-af07-971525be2b05(kube-system/kindnet-x6twd) is in the cache, so can't be assumed)
	I0916 23:57:54.383863  804231 system_pods.go:89] "kube-apiserver-ha-472903" [e2844751-3962-4753-8b63-79c124dd5fd7] Running
	I0916 23:57:54.383874  804231 system_pods.go:89] "kube-apiserver-ha-472903-m02" [6675419c-7693-4970-b73c-8415bcda1684] Running
	I0916 23:57:54.383881  804231 system_pods.go:89] "kube-apiserver-ha-472903-m03" [8c79e747-e193-4471-a4be-ab4d604998ad] Pending
	I0916 23:57:54.383887  804231 system_pods.go:89] "kube-controller-manager-ha-472903" [be5cfd0b-a3b9-44cf-8cde-74e9eb89c738] Running
	I0916 23:57:54.383895  804231 system_pods.go:89] "kube-controller-manager-ha-472903-m02" [54f6e7e0-0a78-4651-b24f-f902c6bf7efb] Running
	I0916 23:57:54.383900  804231 system_pods.go:89] "kube-controller-manager-ha-472903-m03" [d48b1c84-6653-43e8-9322-ae2c64471dde] Pending
	I0916 23:57:54.383908  804231 system_pods.go:89] "kube-proxy-58lkb" [32fed88c-ce9e-4536-8e96-04ab5b4f5d42] Running
	I0916 23:57:54.383913  804231 system_pods.go:89] "kube-proxy-d4m8f" [d4a70eec-48a7-4ea6-871a-1b5ed2beca9a] Running
	I0916 23:57:54.383921  804231 system_pods.go:89] "kube-proxy-kn6nb" [53644856-9fda-4556-bbb5-12254c4b00a3] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-kn6nb": pod kube-proxy-kn6nb is already assigned to node "ha-472903-m03")
	I0916 23:57:54.383930  804231 system_pods.go:89] "kube-proxy-xhlnz" [1967fed1-7529-46d0-accd-ab74751b47fa] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-xhlnz": pod kube-proxy-xhlnz is already assigned to node "ha-472903-m03")
	I0916 23:57:54.383939  804231 system_pods.go:89] "kube-scheduler-ha-472903" [e949de65-b218-45cb-abe7-79b704aae473] Running
	I0916 23:57:54.383946  804231 system_pods.go:89] "kube-scheduler-ha-472903-m02" [08b5a4f0-3aa6-4a82-b171-afc1eafcd4c2] Running
	I0916 23:57:54.383955  804231 system_pods.go:89] "kube-scheduler-ha-472903-m03" [7b954c46-3b8c-47e7-b10c-a20dd936d45c] Pending
	I0916 23:57:54.383962  804231 system_pods.go:89] "kube-vip-ha-472903" [ccdab212-cf0c-4bf0-958b-173e1008f7bc] Running
	I0916 23:57:54.383967  804231 system_pods.go:89] "kube-vip-ha-472903-m02" [748f096f-bec6-4de8-92f0-128db827bdd6] Running
	I0916 23:57:54.383975  804231 system_pods.go:89] "kube-vip-ha-472903-m03" [62b1f237-95c3-4a40-b3b7-a519f7c80ad4] Pending
	I0916 23:57:54.383980  804231 system_pods.go:89] "storage-provisioner" [ac7f283e-4d28-46cf-a519-bd227237d5e7] Running
	I0916 23:57:54.383991  804231 system_pods.go:126] duration metric: took 4.985254ms to wait for k8s-apps to be running ...
	I0916 23:57:54.384002  804231 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 23:57:54.384056  804231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 23:57:54.395540  804231 system_svc.go:56] duration metric: took 11.532177ms WaitForService to wait for kubelet
	I0916 23:57:54.395557  804231 kubeadm.go:578] duration metric: took 2.168909422s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 23:57:54.395577  804231 node_conditions.go:102] verifying NodePressure condition ...
	I0916 23:57:54.398165  804231 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 23:57:54.398183  804231 node_conditions.go:123] node cpu capacity is 8
	I0916 23:57:54.398194  804231 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 23:57:54.398197  804231 node_conditions.go:123] node cpu capacity is 8
	I0916 23:57:54.398201  804231 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 23:57:54.398205  804231 node_conditions.go:123] node cpu capacity is 8
	I0916 23:57:54.398209  804231 node_conditions.go:105] duration metric: took 2.627179ms to run NodePressure ...
	I0916 23:57:54.398219  804231 start.go:241] waiting for startup goroutines ...
	I0916 23:57:54.398248  804231 start.go:255] writing updated cluster config ...
	I0916 23:57:54.398554  804231 ssh_runner.go:195] Run: rm -f paused
	I0916 23:57:54.402187  804231 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0916 23:57:54.402686  804231 kapi.go:59] client config for ha-472903: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key", CAFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 23:57:54.405144  804231 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-c94hz" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:54.409401  804231 pod_ready.go:94] pod "coredns-66bc5c9577-c94hz" is "Ready"
	I0916 23:57:54.409438  804231 pod_ready.go:86] duration metric: took 4.271645ms for pod "coredns-66bc5c9577-c94hz" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:54.409448  804231 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-qn8m7" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:54.413536  804231 pod_ready.go:94] pod "coredns-66bc5c9577-qn8m7" is "Ready"
	I0916 23:57:54.413553  804231 pod_ready.go:86] duration metric: took 4.095453ms for pod "coredns-66bc5c9577-qn8m7" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:54.415699  804231 pod_ready.go:83] waiting for pod "etcd-ha-472903" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:54.419599  804231 pod_ready.go:94] pod "etcd-ha-472903" is "Ready"
	I0916 23:57:54.419618  804231 pod_ready.go:86] duration metric: took 3.899664ms for pod "etcd-ha-472903" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:54.419627  804231 pod_ready.go:83] waiting for pod "etcd-ha-472903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:54.423363  804231 pod_ready.go:94] pod "etcd-ha-472903-m02" is "Ready"
	I0916 23:57:54.423380  804231 pod_ready.go:86] duration metric: took 3.746731ms for pod "etcd-ha-472903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:54.423386  804231 pod_ready.go:83] waiting for pod "etcd-ha-472903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:54.603706  804231 request.go:683] "Waited before sending request" delay="180.227617ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-472903-m03"
	I0916 23:57:54.803902  804231 request.go:683] "Waited before sending request" delay="197.349252ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903-m03"
	I0916 23:57:55.003954  804231 request.go:683] "Waited before sending request" delay="80.206914ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-472903-m03"
	I0916 23:57:55.203362  804231 request.go:683] "Waited before sending request" delay="196.197515ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903-m03"
	I0916 23:57:55.206052  804231 pod_ready.go:94] pod "etcd-ha-472903-m03" is "Ready"
	I0916 23:57:55.206075  804231 pod_ready.go:86] duration metric: took 782.683771ms for pod "etcd-ha-472903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:55.403450  804231 request.go:683] "Waited before sending request" delay="197.254129ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I0916 23:57:55.406629  804231 pod_ready.go:83] waiting for pod "kube-apiserver-ha-472903" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:55.604081  804231 request.go:683] "Waited before sending request" delay="197.327981ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-472903"
	I0916 23:57:55.803277  804231 request.go:683] "Waited before sending request" delay="196.28238ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903"
	I0916 23:57:55.806023  804231 pod_ready.go:94] pod "kube-apiserver-ha-472903" is "Ready"
	I0916 23:57:55.806053  804231 pod_ready.go:86] duration metric: took 399.400731ms for pod "kube-apiserver-ha-472903" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:55.806064  804231 pod_ready.go:83] waiting for pod "kube-apiserver-ha-472903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:56.003360  804231 request.go:683] "Waited before sending request" delay="197.181089ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-472903-m02"
	I0916 23:57:56.203591  804231 request.go:683] "Waited before sending request" delay="197.334062ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903-m02"
	I0916 23:57:56.206593  804231 pod_ready.go:94] pod "kube-apiserver-ha-472903-m02" is "Ready"
	I0916 23:57:56.206619  804231 pod_ready.go:86] duration metric: took 400.548564ms for pod "kube-apiserver-ha-472903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:56.206627  804231 pod_ready.go:83] waiting for pod "kube-apiserver-ha-472903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:56.404053  804231 request.go:683] "Waited before sending request" delay="197.330591ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-472903-m03"
	I0916 23:57:56.603366  804231 request.go:683] "Waited before sending request" delay="196.334008ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903-m03"
	I0916 23:57:56.606216  804231 pod_ready.go:94] pod "kube-apiserver-ha-472903-m03" is "Ready"
	I0916 23:57:56.606240  804231 pod_ready.go:86] duration metric: took 399.60823ms for pod "kube-apiserver-ha-472903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:56.803696  804231 request.go:683] "Waited before sending request" delay="197.341894ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I0916 23:57:56.806878  804231 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-472903" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:57.003237  804231 request.go:683] "Waited before sending request" delay="196.261492ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-472903"
	I0916 23:57:57.203189  804231 request.go:683] "Waited before sending request" delay="197.16206ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903"
	I0916 23:57:57.205847  804231 pod_ready.go:94] pod "kube-controller-manager-ha-472903" is "Ready"
	I0916 23:57:57.205870  804231 pod_ready.go:86] duration metric: took 398.97003ms for pod "kube-controller-manager-ha-472903" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:57.205878  804231 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-472903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:57.403223  804231 request.go:683] "Waited before sending request" delay="197.233762ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-472903-m02"
	I0916 23:57:57.603503  804231 request.go:683] "Waited before sending request" delay="197.308924ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903-m02"
	I0916 23:57:57.606309  804231 pod_ready.go:94] pod "kube-controller-manager-ha-472903-m02" is "Ready"
	I0916 23:57:57.606331  804231 pod_ready.go:86] duration metric: took 400.447455ms for pod "kube-controller-manager-ha-472903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:57.606339  804231 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-472903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:57.803572  804231 request.go:683] "Waited before sending request" delay="197.156861ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-472903-m03"
	I0916 23:57:58.003564  804231 request.go:683] "Waited before sending request" delay="197.308739ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903-m03"
	I0916 23:57:58.006495  804231 pod_ready.go:94] pod "kube-controller-manager-ha-472903-m03" is "Ready"
	I0916 23:57:58.006527  804231 pod_ready.go:86] duration metric: took 400.177209ms for pod "kube-controller-manager-ha-472903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:58.203971  804231 request.go:683] "Waited before sending request" delay="197.330656ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I0916 23:57:58.207087  804231 pod_ready.go:83] waiting for pod "kube-proxy-58lkb" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:58.403484  804231 request.go:683] "Waited before sending request" delay="196.298118ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-58lkb"
	I0916 23:57:58.603727  804231 request.go:683] "Waited before sending request" delay="197.238459ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903-m02"
	I0916 23:57:58.606561  804231 pod_ready.go:94] pod "kube-proxy-58lkb" is "Ready"
	I0916 23:57:58.606586  804231 pod_ready.go:86] duration metric: took 399.476011ms for pod "kube-proxy-58lkb" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:58.606593  804231 pod_ready.go:83] waiting for pod "kube-proxy-d4m8f" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:58.804003  804231 request.go:683] "Waited before sending request" delay="197.323847ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d4m8f"
	I0916 23:57:59.003937  804231 request.go:683] "Waited before sending request" delay="197.340178ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903"
	I0916 23:57:59.006899  804231 pod_ready.go:94] pod "kube-proxy-d4m8f" is "Ready"
	I0916 23:57:59.006927  804231 pod_ready.go:86] duration metric: took 400.327971ms for pod "kube-proxy-d4m8f" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:59.006938  804231 pod_ready.go:83] waiting for pod "kube-proxy-kn6nb" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:59.203366  804231 request.go:683] "Waited before sending request" delay="196.341882ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kn6nb"
	I0916 23:57:59.403608  804231 request.go:683] "Waited before sending request" delay="197.193431ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903-m03"
	I0916 23:57:59.604047  804231 request.go:683] "Waited before sending request" delay="96.244025ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kn6nb"
	I0916 23:57:59.803112  804231 request.go:683] "Waited before sending request" delay="196.282766ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903-m03"
	I0916 23:58:00.203120  804231 request.go:683] "Waited before sending request" delay="192.276334ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903-m03"
	I0916 23:58:00.603459  804231 request.go:683] "Waited before sending request" delay="93.218157ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903-m03"
	W0916 23:58:01.014543  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:03.512871  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:06.012965  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:08.512763  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:11.012966  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:13.013166  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:15.512655  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:18.012615  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:20.513188  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:23.012908  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:25.013240  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:27.512733  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:30.012142  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:32.012503  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:34.013070  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:36.512643  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	I0916 23:58:37.014670  804231 pod_ready.go:94] pod "kube-proxy-kn6nb" is "Ready"
	I0916 23:58:37.014697  804231 pod_ready.go:86] duration metric: took 38.007753603s for pod "kube-proxy-kn6nb" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:37.017732  804231 pod_ready.go:83] waiting for pod "kube-scheduler-ha-472903" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:37.022228  804231 pod_ready.go:94] pod "kube-scheduler-ha-472903" is "Ready"
	I0916 23:58:37.022246  804231 pod_ready.go:86] duration metric: took 4.488553ms for pod "kube-scheduler-ha-472903" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:37.022253  804231 pod_ready.go:83] waiting for pod "kube-scheduler-ha-472903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:37.026173  804231 pod_ready.go:94] pod "kube-scheduler-ha-472903-m02" is "Ready"
	I0916 23:58:37.026191  804231 pod_ready.go:86] duration metric: took 3.932068ms for pod "kube-scheduler-ha-472903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:37.026198  804231 pod_ready.go:83] waiting for pod "kube-scheduler-ha-472903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:37.030029  804231 pod_ready.go:94] pod "kube-scheduler-ha-472903-m03" is "Ready"
	I0916 23:58:37.030046  804231 pod_ready.go:86] duration metric: took 3.843487ms for pod "kube-scheduler-ha-472903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:37.030054  804231 pod_ready.go:40] duration metric: took 42.627839542s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0916 23:58:37.073472  804231 start.go:617] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0916 23:58:37.074923  804231 out.go:179] * Done! kubectl is now configured to use "ha-472903" cluster and "default" namespace by default
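
	Editor's note: the readiness checks logged above (the apiserver /healthz poll and the per-pod "Ready" waits, including the ~38s wait on kube-proxy-kn6nb) can be re-run by hand against the same profile. A minimal sketch, assuming the kubeconfig context created by this run is named after the profile ("ha-472903"):
	  # query the same apiserver health endpoint the log polls at 23:57:54
	  kubectl --context ha-472903 get --raw /healthz
	  # list kube-system pods and their readiness, as in the system_pods.go waits
	  kubectl --context ha-472903 -n kube-system get pods -o wide
	  # show events for the pod that stayed not-Ready the longest
	  kubectl --context ha-472903 -n kube-system describe pod kube-proxy-kn6nb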
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0a41d8b587e02       8c811b4aec35f       13 minutes ago      Running             busybox                   0                   a2422ee3e6e6d       busybox-7b57f96db7-6hrm6
	f33de265effb1       6e38f40d628db       14 minutes ago      Running             storage-provisioner       1                   1c0713f862ea0       storage-provisioner
	9f103b05d2d6f       52546a367cc9e       14 minutes ago      Running             coredns                   0                   9579263342827       coredns-66bc5c9577-c94hz
	3b457407f10e3       52546a367cc9e       14 minutes ago      Running             coredns                   0                   290cfb537788e       coredns-66bc5c9577-qn8m7
	cc69d2451cb65       409467f978b4a       14 minutes ago      Running             kindnet-cni               0                   3e17d6ae9b2a6       kindnet-lh7dv
	f4767b6363ce9       6e38f40d628db       14 minutes ago      Exited              storage-provisioner       0                   1c0713f862ea0       storage-provisioner
	92dd4d116eb03       df0860106674d       14 minutes ago      Running             kube-proxy                0                   8c0ecd5301326       kube-proxy-d4m8f
	3cb75495f7a54       765655ea60781       14 minutes ago      Running             kube-vip                  0                   4c425da29992d       kube-vip-ha-472903
	bba28cace6502       46169d968e920       15 minutes ago      Running             kube-scheduler            0                   f18dd7697c60f       kube-scheduler-ha-472903
	087290a41f59c       a0af72f2ec6d6       15 minutes ago      Running             kube-controller-manager   0                   0760ebe1d2a56       kube-controller-manager-ha-472903
	0aba62132d764       90550c43ad2bc       15 minutes ago      Running             kube-apiserver            0                   8ad1fa8bc0267       kube-apiserver-ha-472903
	23c0af0bdbe95       5f1f5298c888d       15 minutes ago      Running             etcd                      0                   b01a62742caec       etcd-ha-472903
	
	
	==> containerd <==
	Sep 16 23:57:20 ha-472903 containerd[765]: time="2025-09-16T23:57:20.857383931Z" level=info msg="StartContainer for \"9f103b05d2d6fd9df1ffca0135173363251e58587aa3f9093200d96a7302d315\""
	Sep 16 23:57:20 ha-472903 containerd[765]: time="2025-09-16T23:57:20.915209442Z" level=info msg="StartContainer for \"9f103b05d2d6fd9df1ffca0135173363251e58587aa3f9093200d96a7302d315\" returns successfully"
	Sep 16 23:57:26 ha-472903 containerd[765]: time="2025-09-16T23:57:26.847849669Z" level=info msg="received exit event container_id:\"f4767b6363ce9c18f8b183cfefd42f69a8b6845fea9e30eec23d90668bc0a3f8\"  id:\"f4767b6363ce9c18f8b183cfefd42f69a8b6845fea9e30eec23d90668bc0a3f8\"  pid:2188  exit_status:1  exited_at:{seconds:1758067046  nanos:847300745}"
	Sep 16 23:57:29 ha-472903 containerd[765]: time="2025-09-16T23:57:29.084468964Z" level=info msg="shim disconnected" id=f4767b6363ce9c18f8b183cfefd42f69a8b6845fea9e30eec23d90668bc0a3f8 namespace=k8s.io
	Sep 16 23:57:29 ha-472903 containerd[765]: time="2025-09-16T23:57:29.084514637Z" level=warning msg="cleaning up after shim disconnected" id=f4767b6363ce9c18f8b183cfefd42f69a8b6845fea9e30eec23d90668bc0a3f8 namespace=k8s.io
	Sep 16 23:57:29 ha-472903 containerd[765]: time="2025-09-16T23:57:29.084528446Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 16 23:57:29 ha-472903 containerd[765]: time="2025-09-16T23:57:29.861023305Z" level=info msg="CreateContainer within sandbox \"1c0713f862ea047ef39e7ae39aea7b7769255565bbf61da2859ac341b5b32bca\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:1,}"
	Sep 16 23:57:29 ha-472903 containerd[765]: time="2025-09-16T23:57:29.875038922Z" level=info msg="CreateContainer within sandbox \"1c0713f862ea047ef39e7ae39aea7b7769255565bbf61da2859ac341b5b32bca\" for &ContainerMetadata{Name:storage-provisioner,Attempt:1,} returns container id \"f33de265effb1050318db82caef7df35706c6a78a2f601466a28e71f4048fedc\""
	Sep 16 23:57:29 ha-472903 containerd[765]: time="2025-09-16T23:57:29.875884762Z" level=info msg="StartContainer for \"f33de265effb1050318db82caef7df35706c6a78a2f601466a28e71f4048fedc\""
	Sep 16 23:57:29 ha-472903 containerd[765]: time="2025-09-16T23:57:29.929708067Z" level=info msg="StartContainer for \"f33de265effb1050318db82caef7df35706c6a78a2f601466a28e71f4048fedc\" returns successfully"
	Sep 16 23:58:40 ha-472903 containerd[765]: time="2025-09-16T23:58:40.362974621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox-7b57f96db7-6hrm6,Uid:bd03bad4-af1e-42d0-81fb-6fcaeaa8775e,Namespace:default,Attempt:0,}"
	Sep 16 23:58:40 ha-472903 containerd[765]: time="2025-09-16T23:58:40.455106923Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to create inotify fd"
	Sep 16 23:58:40 ha-472903 containerd[765]: time="2025-09-16T23:58:40.455480779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox-7b57f96db7-6hrm6,Uid:bd03bad4-af1e-42d0-81fb-6fcaeaa8775e,Namespace:default,Attempt:0,} returns sandbox id \"a2422ee3e6e6de2b23238fbdd05d962f2c25009569227e21869b285e5353e70a\""
	Sep 16 23:58:40 ha-472903 containerd[765]: time="2025-09-16T23:58:40.457290181Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28\""
	Sep 16 23:58:42 ha-472903 containerd[765]: time="2025-09-16T23:58:42.440332779Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Sep 16 23:58:42 ha-472903 containerd[765]: time="2025-09-16T23:58:42.440968214Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28: active requests=0, bytes read=727667"
	Sep 16 23:58:42 ha-472903 containerd[765]: time="2025-09-16T23:58:42.442025332Z" level=info msg="ImageCreate event name:\"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Sep 16 23:58:42 ha-472903 containerd[765]: time="2025-09-16T23:58:42.443719507Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Sep 16 23:58:42 ha-472903 containerd[765]: time="2025-09-16T23:58:42.444221405Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28\" with image id \"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\", repo tag \"gcr.io/k8s-minikube/busybox:1.28\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12\", size \"725911\" in 1.986887608s"
	Sep 16 23:58:42 ha-472903 containerd[765]: time="2025-09-16T23:58:42.444254598Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28\" returns image reference \"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\""
	Sep 16 23:58:42 ha-472903 containerd[765]: time="2025-09-16T23:58:42.447875079Z" level=info msg="CreateContainer within sandbox \"a2422ee3e6e6de2b23238fbdd05d962f2c25009569227e21869b285e5353e70a\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Sep 16 23:58:42 ha-472903 containerd[765]: time="2025-09-16T23:58:42.457018566Z" level=info msg="CreateContainer within sandbox \"a2422ee3e6e6de2b23238fbdd05d962f2c25009569227e21869b285e5353e70a\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"0a41d8b587e021b881cec481dd5e13b98d695cdba1671a8eb501413b121637d8\""
	Sep 16 23:58:42 ha-472903 containerd[765]: time="2025-09-16T23:58:42.457508138Z" level=info msg="StartContainer for \"0a41d8b587e021b881cec481dd5e13b98d695cdba1671a8eb501413b121637d8\""
	Sep 16 23:58:42 ha-472903 containerd[765]: time="2025-09-16T23:58:42.510633374Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to create inotify fd"
	Sep 16 23:58:42 ha-472903 containerd[765]: time="2025-09-16T23:58:42.512731136Z" level=info msg="StartContainer for \"0a41d8b587e021b881cec481dd5e13b98d695cdba1671a8eb501413b121637d8\" returns successfully"
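
	Editor's note: the "container status" table and the containerd journal above come from the primary node. A minimal sketch of inspecting the same state interactively, assuming the profile is still running and that crictl is available on the node (it ships in the minikube image for the containerd runtime):
	  # open a shell on the primary control-plane node of the ha-472903 profile
	  out/minikube-linux-amd64 -p ha-472903 ssh
	  # inside the node: list all containers, then tail one by the ID prefix shown in the table above
	  sudo crictl ps -a
	  sudo crictl logs 0a41d8b587e02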
	
	
	==> coredns [3b457407f10e357ce33da7fa3fb4333f8312f0d3e3570cf8528cdcac8f5a1d0f] <==
	[INFO] 10.244.1.2:57899 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.012540337s
	[INFO] 10.244.1.2:54323 - 5 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,rd,ra 124 0.008980197s
	[INFO] 10.244.1.2:53799 - 6 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,rd,ra 126 0.009949044s
	[INFO] 10.244.0.4:39485 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157098s
	[INFO] 10.244.0.4:57871 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 89 0.000750185s
	[INFO] 10.244.0.4:53410 - 5 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,aa,rd,ra 126 0.000089028s
	[INFO] 10.244.1.2:37733 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150317s
	[INFO] 10.244.1.2:59346 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.028128363s
	[INFO] 10.244.1.2:43091 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.01004668s
	[INFO] 10.244.1.2:37227 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000191819s
	[INFO] 10.244.1.2:40079 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000125376s
	[INFO] 10.244.0.4:38168 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000181114s
	[INFO] 10.244.0.4:60067 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000087147s
	[INFO] 10.244.0.4:47611 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000122939s
	[INFO] 10.244.0.4:37626 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000121195s
	[INFO] 10.244.1.2:42817 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159509s
	[INFO] 10.244.1.2:33910 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000186538s
	[INFO] 10.244.1.2:37929 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000109836s
	[INFO] 10.244.0.4:50698 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000212263s
	[INFO] 10.244.0.4:33166 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000100167s
	[INFO] 10.244.1.2:50377 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157558s
	[INFO] 10.244.1.2:39491 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000132025s
	[INFO] 10.244.1.2:50075 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000112028s
	[INFO] 10.244.0.4:58743 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000149175s
	[INFO] 10.244.0.4:52796 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000114946s
	
	
	==> coredns [9f103b05d2d6fd9df1ffca0135173363251e58587aa3f9093200d96a7302d315] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45239 - 14115 "HINFO IN 5883645869461503498.3950535614037284853. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.058516241s
	[INFO] 10.244.1.2:55352 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 89 0.003252862s
	[INFO] 10.244.0.4:33650 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.001640931s
	[INFO] 10.244.0.4:50077 - 6 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,rd,ra 124 0.000621363s
	[INFO] 10.244.1.2:48439 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000189187s
	[INFO] 10.244.1.2:39582 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000151327s
	[INFO] 10.244.1.2:59539 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000140715s
	[INFO] 10.244.0.4:42999 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000177514s
	[INFO] 10.244.0.4:36769 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.010694753s
	[INFO] 10.244.0.4:53074 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000158932s
	[INFO] 10.244.0.4:57223 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00012213s
	[INFO] 10.244.1.2:50810 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000176678s
	[INFO] 10.244.0.4:58045 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142445s
	[INFO] 10.244.0.4:39777 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000123555s
	[INFO] 10.244.1.2:59022 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000148853s
	[INFO] 10.244.0.4:45136 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001657s
	[INFO] 10.244.0.4:37711 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000134332s
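
	Editor's note: the coredns logs above record lookups issued from the busybox test pods (10.244.0.4 and 10.244.1.2). A minimal sketch of reproducing such a lookup from inside the cluster, assuming the busybox deployment created by the test is still present:
	  kubectl --context ha-472903 exec busybox-7b57f96db7-6hrm6 -- nslookup kubernetes.default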
	
	
	==> describe nodes <==
	Name:               ha-472903
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-472903
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=ha-472903
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_16T23_56_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Sep 2025 23:56:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-472903
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:11:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:08:42 +0000   Tue, 16 Sep 2025 23:56:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:08:42 +0000   Tue, 16 Sep 2025 23:56:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:08:42 +0000   Tue, 16 Sep 2025 23:56:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:08:42 +0000   Tue, 16 Sep 2025 23:56:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-472903
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 ac22e2ab5b0349cdb9474983aa23278e
	  System UUID:                695af4c7-28fb-4299-9454-75db3262ca2c
	  Boot ID:                    4acfd7d3-9698-436f-b4ae-efdf6bd483d5
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-6hrm6             0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-66bc5c9577-c94hz             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     14m
	  kube-system                 coredns-66bc5c9577-qn8m7             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     14m
	  kube-system                 etcd-ha-472903                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         14m
	  kube-system                 kindnet-lh7dv                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      14m
	  kube-system                 kube-apiserver-ha-472903             250m (3%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-472903    200m (2%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-d4m8f                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-472903             100m (1%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-472903                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node ha-472903 status is now: NodeHasSufficientPID
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node ha-472903 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node ha-472903 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m                kubelet          Node ha-472903 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                kubelet          Node ha-472903 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m                kubelet          Node ha-472903 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m                node-controller  Node ha-472903 event: Registered Node ha-472903 in Controller
	  Normal  RegisteredNode           14m                node-controller  Node ha-472903 event: Registered Node ha-472903 in Controller
	  Normal  RegisteredNode           13m                node-controller  Node ha-472903 event: Registered Node ha-472903 in Controller
	
	
	Name:               ha-472903-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-472903-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=ha-472903
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_16T23_57_23_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Sep 2025 23:57:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-472903-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:11:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:07:45 +0000   Tue, 16 Sep 2025 23:57:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:07:45 +0000   Tue, 16 Sep 2025 23:57:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:07:45 +0000   Tue, 16 Sep 2025 23:57:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:07:45 +0000   Tue, 16 Sep 2025 23:57:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-472903-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 4094672df3d84509ae4c88c54f7f5e93
	  System UUID:                85df9db8-f21a-4038-9f8c-4cc1d81dc0d5
	  Boot ID:                    4acfd7d3-9698-436f-b4ae-efdf6bd483d5
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-4jfjt                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-472903-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         14m
	  kube-system                 kindnet-q7c7s                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      14m
	  kube-system                 kube-apiserver-ha-472903-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-472903-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-58lkb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-472903-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-472903-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  Starting        14m   kube-proxy       
	  Normal  RegisteredNode  14m   node-controller  Node ha-472903-m02 event: Registered Node ha-472903-m02 in Controller
	  Normal  RegisteredNode  14m   node-controller  Node ha-472903-m02 event: Registered Node ha-472903-m02 in Controller
	  Normal  RegisteredNode  13m   node-controller  Node ha-472903-m02 event: Registered Node ha-472903-m02 in Controller
	
	
	Name:               ha-472903-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-472903-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=ha-472903
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_16T23_57_52_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Sep 2025 23:57:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-472903-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:11:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:09:45 +0000   Tue, 16 Sep 2025 23:57:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:09:45 +0000   Tue, 16 Sep 2025 23:57:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:09:45 +0000   Tue, 16 Sep 2025 23:57:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:09:45 +0000   Tue, 16 Sep 2025 23:57:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-472903-m03
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 9964c713c65f4333be8a877aab744040
	  System UUID:                7eb7f2ee-a32d-4876-a4ad-58f745b9c377
	  Boot ID:                    4acfd7d3-9698-436f-b4ae-efdf6bd483d5
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-mknzs                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-472903-m03                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         13m
	  kube-system                 kindnet-x6twd                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-472903-m03             250m (3%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-472903-m03    200m (2%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-kn6nb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-472903-m03             100m (1%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-472903-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  13m   node-controller  Node ha-472903-m03 event: Registered Node ha-472903-m03 in Controller
	  Normal  RegisteredNode  13m   node-controller  Node ha-472903-m03 event: Registered Node ha-472903-m03 in Controller
	  Normal  RegisteredNode  13m   node-controller  Node ha-472903-m03 event: Registered Node ha-472903-m03 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 86 e8 75 4b 01 57 08 06
	[  +0.025562] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff be 3d a9 85 b1 bd 08 06
	[ +13.150028] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff d6 5c f0 26 cd ba 08 06
	[  +0.000341] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a 20 90 fb f5 d8 08 06
	[ +28.639349] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 06 26 63 8d db 90 08 06
	[  +0.000417] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff be 3d a9 85 b1 bd 08 06
	[  +0.836892] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 56 cc 9b 52 38 94 08 06
	[  +0.080327] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3e 79 8e c8 7e 37 08 06
	[Sep16 23:40] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 66 3b 76 df aa 6a 08 06
	[ +20.325550] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff de 39 4b 41 df 63 08 06
	[  +0.000318] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 66 3b 76 df aa 6a 08 06
	[  +8.925776] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3e cd c1 f7 dc c8 08 06
	[  +0.000373] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 3e 79 8e c8 7e 37 08 06
	
	
	==> etcd [23c0af0bdbe9526d53769461ed9f80d8c743b02e625b65cce39c888f5e7d4b4e] <==
	{"level":"warn","ts":"2025-09-16T23:57:38.539376Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:45372","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-16T23:57:38.542781Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aec36adc501070cc switched to configuration voters=(4226730353838347643 12366044076840555621 12593026477526642892)"}
	{"level":"info","ts":"2025-09-16T23:57:38.542928Z","caller":"membership/cluster.go:550","msg":"promote member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","promoted-member-id":"ab9d0391dce79465"}
	{"level":"info","ts":"2025-09-16T23:57:38.542988Z","caller":"etcdserver/server.go:1752","msg":"applied a configuration change through raft","local-member-id":"aec36adc501070cc","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"ab9d0391dce79465"}
	{"level":"info","ts":"2025-09-16T23:57:40.311787Z","caller":"etcdserver/server.go:1856","msg":"sent merged snapshot","from":"aec36adc501070cc","to":"3aa85cdcd5e5557b","bytes":876533,"size":"876 kB","took":"30.009467109s"}
	{"level":"info","ts":"2025-09-16T23:57:47.400606Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-16T23:57:51.874557Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-16T23:58:06.103123Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-16T23:58:08.299219Z","caller":"etcdserver/server.go:1856","msg":"sent merged snapshot","from":"aec36adc501070cc","to":"ab9d0391dce79465","bytes":1356737,"size":"1.4 MB","took":"30.011071692s"}
	{"level":"info","ts":"2025-09-17T00:06:46.502551Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1554}
	{"level":"info","ts":"2025-09-17T00:06:46.523688Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1554,"took":"20.616779ms","hash":4277915431,"current-db-size-bytes":3936256,"current-db-size":"3.9 MB","current-db-size-in-use-bytes":2105344,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2025-09-17T00:06:46.523839Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":4277915431,"revision":1554,"compact-revision":-1}
	{"level":"info","ts":"2025-09-17T00:10:51.037991Z","caller":"traceutil/trace.go:172","msg":"trace[1596502853] transaction","detail":"{read_only:false; response_revision:2892; number_of_response:1; }","duration":"106.292545ms","start":"2025-09-17T00:10:50.931676Z","end":"2025-09-17T00:10:51.037969Z","steps":["trace[1596502853] 'process raft request'  (duration: 106.163029ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-17T00:10:52.331973Z","caller":"traceutil/trace.go:172","msg":"trace[583569919] transaction","detail":"{read_only:false; response_revision:2894; number_of_response:1; }","duration":"112.232554ms","start":"2025-09-17T00:10:52.219723Z","end":"2025-09-17T00:10:52.331956Z","steps":["trace[583569919] 'process raft request'  (duration: 112.100203ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-17T00:11:09.266390Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"165.274935ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:602"}
	{"level":"info","ts":"2025-09-17T00:11:09.266493Z","caller":"traceutil/trace.go:172","msg":"trace[316861325] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:2934; }","duration":"165.393135ms","start":"2025-09-17T00:11:09.101086Z","end":"2025-09-17T00:11:09.266479Z","steps":["trace[316861325] 'range keys from in-memory index tree'  (duration: 164.766592ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-17T00:11:09.393171Z","caller":"traceutil/trace.go:172","msg":"trace[484529161] transaction","detail":"{read_only:false; response_revision:2935; number_of_response:1; }","duration":"123.717206ms","start":"2025-09-17T00:11:09.269439Z","end":"2025-09-17T00:11:09.393156Z","steps":["trace[484529161] 'process raft request'  (duration: 123.599826ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-17T00:11:09.634612Z","caller":"traceutil/trace.go:172","msg":"trace[1840342263] transaction","detail":"{read_only:false; response_revision:2936; number_of_response:1; }","duration":"177.817508ms","start":"2025-09-17T00:11:09.456780Z","end":"2025-09-17T00:11:09.634597Z","steps":["trace[1840342263] 'process raft request'  (duration: 177.726281ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-17T00:11:45.636591Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"3aa85cdcd5e5557b","error":"unexpected EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:45.636724Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"3aa85cdcd5e5557b","error":"failed to read 3aa85cdcd5e5557b on stream Message (unexpected EOF)"}
	{"level":"warn","ts":"2025-09-17T00:11:45.636591Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"3aa85cdcd5e5557b","error":"unexpected EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:45.711307Z","caller":"rafthttp/stream.go:222","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"3aa85cdcd5e5557b"}
	{"level":"info","ts":"2025-09-17T00:11:46.508111Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":2296}
	{"level":"info","ts":"2025-09-17T00:11:46.524612Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":2296,"took":"16.037136ms","hash":1066647384,"current-db-size-bytes":3936256,"current-db-size":"3.9 MB","current-db-size-in-use-bytes":1675264,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2025-09-17T00:11:46.524663Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1066647384,"revision":2296,"compact-revision":1554}
	
	
	==> kernel <==
	 00:11:47 up  2:54,  0 users,  load average: 0.64, 0.48, 0.82
	Linux ha-472903 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [cc69d2451cb65860b5bc78e027be2fc1cb0f9fa6542b4abe3bc1ff1c90a8fe60] <==
	I0917 00:11:07.512233       1 main.go:324] Node ha-472903-m03 has CIDR [10.244.2.0/24] 
	I0917 00:11:17.511478       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:11:17.511518       1 main.go:301] handling current node
	I0917 00:11:17.511535       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:11:17.511540       1 main.go:324] Node ha-472903-m02 has CIDR [10.244.1.0/24] 
	I0917 00:11:17.511701       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:11:17.511709       1 main.go:324] Node ha-472903-m03 has CIDR [10.244.2.0/24] 
	I0917 00:11:27.503508       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:11:27.503550       1 main.go:301] handling current node
	I0917 00:11:27.503570       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:11:27.503577       1 main.go:324] Node ha-472903-m02 has CIDR [10.244.1.0/24] 
	I0917 00:11:27.503775       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:11:27.503786       1 main.go:324] Node ha-472903-m03 has CIDR [10.244.2.0/24] 
	I0917 00:11:37.511483       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:11:37.511527       1 main.go:301] handling current node
	I0917 00:11:37.511545       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:11:37.511550       1 main.go:324] Node ha-472903-m02 has CIDR [10.244.1.0/24] 
	I0917 00:11:37.511770       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:11:37.511783       1 main.go:324] Node ha-472903-m03 has CIDR [10.244.2.0/24] 
	I0917 00:11:47.503957       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:11:47.503984       1 main.go:301] handling current node
	I0917 00:11:47.503999       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:11:47.504022       1 main.go:324] Node ha-472903-m02 has CIDR [10.244.1.0/24] 
	I0917 00:11:47.504217       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:11:47.504230       1 main.go:324] Node ha-472903-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [0aba62132d764965d8e1a80a4a6345bb7e34892b23143da4a7af3450cd465d6c] <==
	I0917 00:06:06.800617       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:06:32.710262       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:06:47.441344       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0917 00:07:34.732036       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:07:42.022448       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:08:46.236959       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:08:51.159386       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:09:52.603432       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:09:53.014406       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0917 00:10:41.954540       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37534: use of closed network connection
	E0917 00:10:42.122977       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37556: use of closed network connection
	E0917 00:10:42.250606       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37572: use of closed network connection
	E0917 00:10:42.442469       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37584: use of closed network connection
	E0917 00:10:42.605380       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37602: use of closed network connection
	E0917 00:10:42.730284       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37612: use of closed network connection
	E0917 00:10:42.884291       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37626: use of closed network connection
	E0917 00:10:43.036952       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37644: use of closed network connection
	E0917 00:10:43.161098       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37658: use of closed network connection
	E0917 00:10:45.408563       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37722: use of closed network connection
	E0917 00:10:45.568465       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37752: use of closed network connection
	E0917 00:10:45.727267       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37770: use of closed network connection
	E0917 00:10:45.883182       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37790: use of closed network connection
	E0917 00:10:46.004301       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37814: use of closed network connection
	I0917 00:10:57.282648       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:10:57.462257       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [087290a41f59caa4f9bc89759bcec6cf90f47c8a2ab83b7c671a8fff35641df9] <==
	I0916 23:56:54.728442       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0916 23:56:54.728466       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0916 23:56:54.728485       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0916 23:56:54.728644       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0916 23:56:54.728665       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0916 23:56:54.728648       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0916 23:56:54.728914       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0916 23:56:54.730175       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0916 23:56:54.730201       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0916 23:56:54.732432       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0916 23:56:54.733452       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0916 23:56:54.735655       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0916 23:56:54.735714       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0916 23:56:54.735760       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0916 23:56:54.735767       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0916 23:56:54.735772       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0916 23:56:54.740680       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-472903" podCIDRs=["10.244.0.0/24"]
	I0916 23:56:54.749950       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0916 23:57:22.933124       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-472903-m02\" does not exist"
	I0916 23:57:22.943785       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-472903-m02" podCIDRs=["10.244.1.0/24"]
	I0916 23:57:24.681339       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-472903-m02"
	I0916 23:57:51.749676       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-472903-m03\" does not exist"
	I0916 23:57:51.772476       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-472903-m03" podCIDRs=["10.244.2.0/24"]
	E0916 23:57:51.829801       1 daemon_controller.go:346] "Unhandled Error" err="kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kube-proxy\", GenerateName:\"\", Namespace:\"kube-system\", SelfLink:\"\", UID:\"3f5da9fc-6769-4ca8-a715-edeace44c646\", ResourceVersion:\"594\", Generation:1, CreationTimestamp:time.Date(2025, time.September, 16, 23, 56, 49, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"k8s-app\":\"kube-proxy\"}, Annotations:map[string]string{\"deprecated.daemonset.template.generation\":\"1\"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc00222d0e0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:\"\", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"
\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"k8s-app\":\"kube-proxy\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:\"kube-proxy\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSourc
e)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc0021ed7c0), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"xtables-lock\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc002fcdce0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolu
meSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIV
olumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"lib-modules\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc002fcdcf8), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtu
alDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:\"kube-proxy\", Image:\"registry.k8s.io/kube-proxy:v1.34.0\", Command:[]string{\"/usr/local/bin/kube-proxy\", \"--config=/var/lib/kube-proxy/config.conf\", \"--hostname-override=$(NODE_NAME)\"}, Args:[]string(nil), WorkingDir:\"\", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:\"NODE_NAME\", Value:\"\", ValueFrom:(*v1.EnvVarSource)(0xc00144a7e0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.Re
sourceList(nil), Claims:[]v1.ResourceClaim(nil)}, ResizePolicy:[]v1.ContainerResizePolicy(nil), RestartPolicy:(*v1.ContainerRestartPolicy)(nil), RestartPolicyRules:[]v1.ContainerRestartRule(nil), VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:\"kube-proxy\", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/var/lib/kube-proxy\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"xtables-lock\", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/run/xtables.lock\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"lib-modules\", ReadOnly:true, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/lib/modules\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Life
cycle:(*v1.Lifecycle)(nil), TerminationMessagePath:\"/dev/termination-log\", TerminationMessagePolicy:\"File\", ImagePullPolicy:\"IfNotPresent\", SecurityContext:(*v1.SecurityContext)(0xc0019549c0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:\"Always\", TerminationGracePeriodSeconds:(*int64)(0xc001900b18), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:\"ClusterFirst\", NodeSelector:map[string]string{\"kubernetes.io/os\":\"linux\"}, ServiceAccountName:\"kube-proxy\", DeprecatedServiceAccount:\"kube-proxy\", AutomountServiceAccountToken:(*bool)(nil), NodeName:\"\", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001ba1200), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:\"\", Subdomain:\"\", Affinity:(*v1.Affinity)(nil), SchedulerName:\"default-scheduler\", Tolerations:[]v1.Toleration{v1.Toleration{Key:\"\", Operator:\"Exists\", Value:\"\", Effect:\"\", Tole
rationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:\"system-node-critical\", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil), SchedulingGates:[]v1.PodSchedulingGate(nil), ResourceClaims:[]v1.PodResourceClaim(nil), Resources:(*v1.ResourceRequirements)(nil), HostnameOverride:(*string)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:\"RollingUpdate\", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc001e14570)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001900b70)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:2, NumberMisscheduled:0, DesiredNumberScheduled:2, NumberReady:2, ObservedGeneration:1, UpdatedNumberScheduled:2, NumberAvailab
le:2, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps \"kube-proxy\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0916 23:57:54.685322       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-472903-m03"
	
	
	==> kube-proxy [92dd4d116eb0387dded82fb32d35690ec2d00e3f5e7ac81bf7aea0c6814edd5e] <==
	I0916 23:56:56.831012       1 server_linux.go:53] "Using iptables proxy"
	I0916 23:56:56.891635       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0916 23:56:56.991820       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0916 23:56:56.991862       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0916 23:56:56.991952       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 23:56:57.015955       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 23:56:57.016001       1 server_linux.go:132] "Using iptables Proxier"
	I0916 23:56:57.021120       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 23:56:57.021457       1 server.go:527] "Version info" version="v1.34.0"
	I0916 23:56:57.021499       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 23:56:57.024872       1 config.go:200] "Starting service config controller"
	I0916 23:56:57.024892       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0916 23:56:57.024900       1 config.go:106] "Starting endpoint slice config controller"
	I0916 23:56:57.024909       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0916 23:56:57.024890       1 config.go:403] "Starting serviceCIDR config controller"
	I0916 23:56:57.024917       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0916 23:56:57.024937       1 config.go:309] "Starting node config controller"
	I0916 23:56:57.024942       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0916 23:56:57.125608       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0916 23:56:57.125691       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0916 23:56:57.125856       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0916 23:56:57.125902       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [bba28cace6502de93aa43db4fb51671581c5074990dea721d98d36d839734a67] <==
	E0916 23:56:48.619869       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0916 23:56:48.649766       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0916 23:56:48.673092       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	I0916 23:56:49.170967       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0916 23:57:51.780040       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-x6twd\": pod kindnet-x6twd is already assigned to node \"ha-472903-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-x6twd" node="ha-472903-m03"
	E0916 23:57:51.780142       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod f2346479-1adb-4bc7-af07-971525be2b05(kube-system/kindnet-x6twd) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-x6twd"
	E0916 23:57:51.780183       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-x6twd\": pod kindnet-x6twd is already assigned to node \"ha-472903-m03\"" logger="UnhandledError" pod="kube-system/kindnet-x6twd"
	I0916 23:57:51.782132       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-x6twd" node="ha-472903-m03"
	E0916 23:58:37.948695       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-x6xc9\": pod busybox-7b57f96db7-x6xc9 is already assigned to node \"ha-472903-m02\"" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-x6xc9" node="ha-472903-m02"
	E0916 23:58:37.948846       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 565a634f-ab41-4776-ba5d-63a601bfec48(default/busybox-7b57f96db7-x6xc9) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="default/busybox-7b57f96db7-x6xc9"
	E0916 23:58:37.948875       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-x6xc9\": pod busybox-7b57f96db7-x6xc9 is already assigned to node \"ha-472903-m02\"" logger="UnhandledError" pod="default/busybox-7b57f96db7-x6xc9"
	I0916 23:58:37.950251       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7b57f96db7-x6xc9" node="ha-472903-m02"
	I0916 23:58:37.966099       1 cache.go:512] "Pod was added to a different node than it was assumed" podKey="47b06c15-c007-4c50-a248-5411a0f4b6a7" pod="default/busybox-7b57f96db7-4jfjt" assumedNode="ha-472903-m02" currentNode="ha-472903"
	E0916 23:58:37.968241       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-4jfjt\": pod busybox-7b57f96db7-4jfjt is already assigned to node \"ha-472903-m02\"" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-4jfjt" node="ha-472903"
	E0916 23:58:37.968351       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 47b06c15-c007-4c50-a248-5411a0f4b6a7(default/busybox-7b57f96db7-4jfjt) was assumed on ha-472903 but assigned to ha-472903-m02" logger="UnhandledError" pod="default/busybox-7b57f96db7-4jfjt"
	E0916 23:58:37.968376       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-4jfjt\": pod busybox-7b57f96db7-4jfjt is already assigned to node \"ha-472903-m02\"" logger="UnhandledError" pod="default/busybox-7b57f96db7-4jfjt"
	I0916 23:58:37.969472       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7b57f96db7-4jfjt" node="ha-472903-m02"
	E0916 23:58:38.002469       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-wp95z\": pod busybox-7b57f96db7-wp95z is being deleted, cannot be assigned to a host" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-wp95z" node="ha-472903"
	E0916 23:58:38.002779       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-wp95z\": pod busybox-7b57f96db7-wp95z is being deleted, cannot be assigned to a host" logger="UnhandledError" pod="default/busybox-7b57f96db7-wp95z"
	E0916 23:58:38.046394       1 pod_status_patch.go:111] "Failed to patch pod status" err="pods \"busybox-7b57f96db7-xnrsc\" not found" pod="default/busybox-7b57f96db7-xnrsc"
	E0916 23:58:38.046880       1 pod_status_patch.go:111] "Failed to patch pod status" err="pods \"busybox-7b57f96db7-wp95z\" not found" pod="default/busybox-7b57f96db7-wp95z"
	E0916 23:58:40.050124       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-6hrm6\": pod busybox-7b57f96db7-6hrm6 is already assigned to node \"ha-472903\"" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-6hrm6" node="ha-472903"
	E0916 23:58:40.050213       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod bd03bad4-af1e-42d0-81fb-6fcaeaa8775e(default/busybox-7b57f96db7-6hrm6) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="default/busybox-7b57f96db7-6hrm6"
	E0916 23:58:40.050248       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-6hrm6\": pod busybox-7b57f96db7-6hrm6 is already assigned to node \"ha-472903\"" logger="UnhandledError" pod="default/busybox-7b57f96db7-6hrm6"
	I0916 23:58:40.051853       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7b57f96db7-6hrm6" node="ha-472903"
	
	
	==> kubelet <==
	Sep 16 23:58:38 ha-472903 kubelet[1676]: E0916 23:58:38.235025    1676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cc7a8d10-408f-4655-ac70-54b4af22d9eb-kube-api-access-hrb62 podName:cc7a8d10-408f-4655-ac70-54b4af22d9eb nodeName:}" failed. No retries permitted until 2025-09-16 23:58:38.735007966 +0000 UTC m=+109.066439678 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-hrb62" (UniqueName: "kubernetes.io/projected/cc7a8d10-408f-4655-ac70-54b4af22d9eb-kube-api-access-hrb62") pod "busybox-7b57f96db7-5pwbb" (UID: "cc7a8d10-408f-4655-ac70-54b4af22d9eb") : failed to fetch token: pod "busybox-7b57f96db7-5pwbb" not found
	Sep 16 23:58:38 ha-472903 kubelet[1676]: E0916 23:58:38.737179    1676 projected.go:196] Error preparing data for projected volume kube-api-access-xrpwc for pod default/busybox-7b57f96db7-xj7ks: failed to fetch token: pod "busybox-7b57f96db7-xj7ks" not found
	Sep 16 23:58:38 ha-472903 kubelet[1676]: E0916 23:58:38.737266    1676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cac915f6-7630-4320-b6d2-fd18f3c19a17-kube-api-access-xrpwc podName:cac915f6-7630-4320-b6d2-fd18f3c19a17 nodeName:}" failed. No retries permitted until 2025-09-16 23:58:39.737245356 +0000 UTC m=+110.068677057 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-xrpwc" (UniqueName: "kubernetes.io/projected/cac915f6-7630-4320-b6d2-fd18f3c19a17-kube-api-access-xrpwc") pod "busybox-7b57f96db7-xj7ks" (UID: "cac915f6-7630-4320-b6d2-fd18f3c19a17") : failed to fetch token: pod "busybox-7b57f96db7-xj7ks" not found
	Sep 16 23:58:38 ha-472903 kubelet[1676]: E0916 23:58:38.737179    1676 projected.go:196] Error preparing data for projected volume kube-api-access-hrb62 for pod default/busybox-7b57f96db7-5pwbb: failed to fetch token: pod "busybox-7b57f96db7-5pwbb" not found
	Sep 16 23:58:38 ha-472903 kubelet[1676]: E0916 23:58:38.737371    1676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cc7a8d10-408f-4655-ac70-54b4af22d9eb-kube-api-access-hrb62 podName:cc7a8d10-408f-4655-ac70-54b4af22d9eb nodeName:}" failed. No retries permitted until 2025-09-16 23:58:39.737351933 +0000 UTC m=+110.068783647 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-hrb62" (UniqueName: "kubernetes.io/projected/cc7a8d10-408f-4655-ac70-54b4af22d9eb-kube-api-access-hrb62") pod "busybox-7b57f96db7-5pwbb" (UID: "cc7a8d10-408f-4655-ac70-54b4af22d9eb") : failed to fetch token: pod "busybox-7b57f96db7-5pwbb" not found
	Sep 16 23:58:39 ha-472903 kubelet[1676]: E0916 23:58:39.027158    1676 status_manager.go:1018] "Failed to get status for pod" err="pods \"busybox-7b57f96db7-5pwbb\" is forbidden: User \"system:node:ha-472903\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'ha-472903' and this object" podUID="cc7a8d10-408f-4655-ac70-54b4af22d9eb" pod="default/busybox-7b57f96db7-5pwbb"
	Sep 16 23:58:39 ha-472903 kubelet[1676]: E0916 23:58:39.028111    1676 status_manager.go:1018] "Failed to get status for pod" err="pods \"busybox-7b57f96db7-xj7ks\" is forbidden: User \"system:node:ha-472903\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'ha-472903' and this object" podUID="cac915f6-7630-4320-b6d2-fd18f3c19a17" pod="default/busybox-7b57f96db7-xj7ks"
	Sep 16 23:58:39 ha-472903 kubelet[1676]: E0916 23:58:39.039445    1676 status_manager.go:1018] "Failed to get status for pod" err="pods \"busybox-7b57f96db7-xj7ks\" is forbidden: User \"system:node:ha-472903\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'ha-472903' and this object" podUID="cac915f6-7630-4320-b6d2-fd18f3c19a17" pod="default/busybox-7b57f96db7-xj7ks"
	Sep 16 23:58:39 ha-472903 kubelet[1676]: E0916 23:58:39.042381    1676 status_manager.go:1018] "Failed to get status for pod" err="pods \"busybox-7b57f96db7-5pwbb\" is forbidden: User \"system:node:ha-472903\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'ha-472903' and this object" podUID="cc7a8d10-408f-4655-ac70-54b4af22d9eb" pod="default/busybox-7b57f96db7-5pwbb"
	Sep 16 23:58:39 ha-472903 kubelet[1676]: I0916 23:58:39.138755    1676 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9njqf\" (UniqueName: \"kubernetes.io/projected/59b9a23c-498d-4802-9790-70931c4a2c06-kube-api-access-9njqf\") pod \"59b9a23c-498d-4802-9790-70931c4a2c06\" (UID: \"59b9a23c-498d-4802-9790-70931c4a2c06\") "
	Sep 16 23:58:39 ha-472903 kubelet[1676]: I0916 23:58:39.138821    1676 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hrb62\" (UniqueName: \"kubernetes.io/projected/cc7a8d10-408f-4655-ac70-54b4af22d9eb-kube-api-access-hrb62\") on node \"ha-472903\" DevicePath \"\""
	Sep 16 23:58:39 ha-472903 kubelet[1676]: I0916 23:58:39.138836    1676 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xrpwc\" (UniqueName: \"kubernetes.io/projected/cac915f6-7630-4320-b6d2-fd18f3c19a17-kube-api-access-xrpwc\") on node \"ha-472903\" DevicePath \"\""
	Sep 16 23:58:39 ha-472903 kubelet[1676]: I0916 23:58:39.140952    1676 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59b9a23c-498d-4802-9790-70931c4a2c06-kube-api-access-9njqf" (OuterVolumeSpecName: "kube-api-access-9njqf") pod "59b9a23c-498d-4802-9790-70931c4a2c06" (UID: "59b9a23c-498d-4802-9790-70931c4a2c06"). InnerVolumeSpecName "kube-api-access-9njqf". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Sep 16 23:58:39 ha-472903 kubelet[1676]: I0916 23:58:39.239025    1676 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9njqf\" (UniqueName: \"kubernetes.io/projected/59b9a23c-498d-4802-9790-70931c4a2c06-kube-api-access-9njqf\") on node \"ha-472903\" DevicePath \"\""
	Sep 16 23:58:39 ha-472903 kubelet[1676]: E0916 23:58:39.752137    1676 status_manager.go:1018] "Failed to get status for pod" err="pods \"busybox-7b57f96db7-5pwbb\" is forbidden: User \"system:node:ha-472903\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'ha-472903' and this object" podUID="cc7a8d10-408f-4655-ac70-54b4af22d9eb" pod="default/busybox-7b57f96db7-5pwbb"
	Sep 16 23:58:39 ha-472903 kubelet[1676]: E0916 23:58:39.753199    1676 status_manager.go:1018] "Failed to get status for pod" err="pods \"busybox-7b57f96db7-xj7ks\" is forbidden: User \"system:node:ha-472903\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'ha-472903' and this object" podUID="cac915f6-7630-4320-b6d2-fd18f3c19a17" pod="default/busybox-7b57f96db7-xj7ks"
	Sep 16 23:58:39 ha-472903 kubelet[1676]: I0916 23:58:39.754268    1676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cac915f6-7630-4320-b6d2-fd18f3c19a17" path="/var/lib/kubelet/pods/cac915f6-7630-4320-b6d2-fd18f3c19a17/volumes"
	Sep 16 23:58:39 ha-472903 kubelet[1676]: I0916 23:58:39.754475    1676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc7a8d10-408f-4655-ac70-54b4af22d9eb" path="/var/lib/kubelet/pods/cc7a8d10-408f-4655-ac70-54b4af22d9eb/volumes"
	Sep 16 23:58:40 ha-472903 kubelet[1676]: E0916 23:58:40.056772    1676 status_manager.go:1018] "Failed to get status for pod" err="pods \"busybox-7b57f96db7-5pwbb\" is forbidden: User \"system:node:ha-472903\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'ha-472903' and this object" podUID="cc7a8d10-408f-4655-ac70-54b4af22d9eb" pod="default/busybox-7b57f96db7-5pwbb"
	Sep 16 23:58:40 ha-472903 kubelet[1676]: E0916 23:58:40.057611    1676 status_manager.go:1018] "Failed to get status for pod" err="pods \"busybox-7b57f96db7-xj7ks\" is forbidden: User \"system:node:ha-472903\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'ha-472903' and this object" podUID="cac915f6-7630-4320-b6d2-fd18f3c19a17" pod="default/busybox-7b57f96db7-xj7ks"
	Sep 16 23:58:40 ha-472903 kubelet[1676]: E0916 23:58:40.059208    1676 status_manager.go:1018] "Failed to get status for pod" err="pods \"busybox-7b57f96db7-5pwbb\" is forbidden: User \"system:node:ha-472903\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'ha-472903' and this object" podUID="cc7a8d10-408f-4655-ac70-54b4af22d9eb" pod="default/busybox-7b57f96db7-5pwbb"
	Sep 16 23:58:40 ha-472903 kubelet[1676]: E0916 23:58:40.060512    1676 status_manager.go:1018] "Failed to get status for pod" err="pods \"busybox-7b57f96db7-xj7ks\" is forbidden: User \"system:node:ha-472903\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'ha-472903' and this object" podUID="cac915f6-7630-4320-b6d2-fd18f3c19a17" pod="default/busybox-7b57f96db7-xj7ks"
	Sep 16 23:58:40 ha-472903 kubelet[1676]: I0916 23:58:40.145054    1676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjkrp\" (UniqueName: \"kubernetes.io/projected/bd03bad4-af1e-42d0-81fb-6fcaeaa8775e-kube-api-access-pjkrp\") pod \"busybox-7b57f96db7-6hrm6\" (UID: \"bd03bad4-af1e-42d0-81fb-6fcaeaa8775e\") " pod="default/busybox-7b57f96db7-6hrm6"
	Sep 16 23:58:41 ha-472903 kubelet[1676]: I0916 23:58:41.754549    1676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59b9a23c-498d-4802-9790-70931c4a2c06" path="/var/lib/kubelet/pods/59b9a23c-498d-4802-9790-70931c4a2c06/volumes"
	Sep 16 23:58:43 ha-472903 kubelet[1676]: I0916 23:58:43.049200    1676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-7b57f96db7-6hrm6" podStartSLOduration=3.061025393 podStartE2EDuration="5.049179166s" podCreationTimestamp="2025-09-16 23:58:38 +0000 UTC" firstStartedPulling="2025-09-16 23:58:40.45690156 +0000 UTC m=+110.788333264" lastFinishedPulling="2025-09-16 23:58:42.445055322 +0000 UTC m=+112.776487037" observedRunningTime="2025-09-16 23:58:43.049092106 +0000 UTC m=+113.380523828" watchObservedRunningTime="2025-09-16 23:58:43.049179166 +0000 UTC m=+113.380610888"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-472903 -n ha-472903
helpers_test.go:269: (dbg) Run:  kubectl --context ha-472903 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-7b57f96db7-mknzs
helpers_test.go:282: ======> post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context ha-472903 describe pod busybox-7b57f96db7-mknzs
helpers_test.go:290: (dbg) kubectl --context ha-472903 describe pod busybox-7b57f96db7-mknzs:

                                                
                                                
-- stdout --
	Name:             busybox-7b57f96db7-mknzs
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             ha-472903-m03/192.168.49.4
	Start Time:       Tue, 16 Sep 2025 23:58:37 +0000
	Labels:           app=busybox
	                  pod-template-hash=7b57f96db7
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7b57f96db7
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gmz92 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-gmz92:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason                  Age                  From               Message
	  ----     ------                  ----                 ----               -------
	  Warning  FailedScheduling        13m                  default-scheduler  running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "busybox-7b57f96db7-mknzs": pod busybox-7b57f96db7-mknzs is already assigned to node "ha-472903-m03"
	  Warning  FailedScheduling        13m                  default-scheduler  running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "busybox-7b57f96db7-mknzs": pod busybox-7b57f96db7-mknzs is already assigned to node "ha-472903-m03"
	  Normal   Scheduled               13m                  default-scheduler  Successfully assigned default/busybox-7b57f96db7-mknzs to ha-472903-m03
	  Warning  FailedCreatePodSandBox  13m                  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "72439adc47052c2da00cee62587d780275cf6c2423dee9831567464d4725ee9d": failed to find network info for sandbox "72439adc47052c2da00cee62587d780275cf6c2423dee9831567464d4725ee9d"
	  Warning  FailedCreatePodSandBox  12m                  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "24ab8b6bd2f38653d2326c375fc81ebf17317e36885547c7b42c011bb95889ed": failed to find network info for sandbox "24ab8b6bd2f38653d2326c375fc81ebf17317e36885547c7b42c011bb95889ed"
	  Warning  FailedCreatePodSandBox  12m                  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "300fece4c100bc3e68a19e1fa6f46c8a378753727caaaeb1533dab71f234be58": failed to find network info for sandbox "300fece4c100bc3e68a19e1fa6f46c8a378753727caaaeb1533dab71f234be58"
	  Warning  FailedCreatePodSandBox  12m                  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "e49a14b4de5e24fa450a43c124b2916ad7028d35cbc3b0f74595e68ee161d1d0": failed to find network info for sandbox "e49a14b4de5e24fa450a43c124b2916ad7028d35cbc3b0f74595e68ee161d1d0"
	  Warning  FailedCreatePodSandBox  12m                  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "efa290ca498f7c70ae29d8d97709edda97bc6b062aac05a3ef6d6a83fbd42797": failed to find network info for sandbox "efa290ca498f7c70ae29d8d97709edda97bc6b062aac05a3ef6d6a83fbd42797"
	  Warning  FailedCreatePodSandBox  12m                  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "d5851ce1270b1c8994400ecd7bdabadaf895488957ffb5173dcd7e289db1de6c": failed to find network info for sandbox "d5851ce1270b1c8994400ecd7bdabadaf895488957ffb5173dcd7e289db1de6c"
	  Warning  FailedCreatePodSandBox  11m                  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "11aaa894ae434b08da8122c8f3445d03b4c1e54dfb071596f63a0e4654f49f10": failed to find network info for sandbox "11aaa894ae434b08da8122c8f3445d03b4c1e54dfb071596f63a0e4654f49f10"
	  Warning  FailedCreatePodSandBox  11m                  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "c8126e80126ff891a4935c60cfec55753f6bb51d789c0eb46098b72267c7d53c": failed to find network info for sandbox "c8126e80126ff891a4935c60cfec55753f6bb51d789c0eb46098b72267c7d53c"
	  Warning  FailedCreatePodSandBox  11m                  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "1389a2f92f350a6f495c76f80031300b6442a6a0cc67abd4b045ff9150b3fc3a": failed to find network info for sandbox "1389a2f92f350a6f495c76f80031300b6442a6a0cc67abd4b045ff9150b3fc3a"
	  Warning  FailedCreatePodSandBox  3m6s (x38 over 11m)  kubelet            (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "c3a9afe91461f3ea405980387ac5fab85785c7cf3f180d2b0f894e1df94ca62d": failed to find network info for sandbox "c3a9afe91461f3ea405980387ac5fab85785c7cf3f180d2b0f894e1df94ca62d"

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (14.45s)
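The FailedCreatePodSandBox warnings above all carry the same containerd error, "failed to find network info for sandbox", which typically means no usable CNI network configuration was in place on ha-472903-m03 when kubelet asked for the pod sandbox, so the busybox pod never got past sandbox creation. A minimal, read-only way to pull the same evidence straight from the cluster, assuming kubectl is pointed at the ha-472903 cluster and using the pod name from the events above, is:

	kubectl get events -A --field-selector reason=FailedCreatePodSandBox
	kubectl -n default describe pod busybox-7b57f96db7-mknzs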

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (67.28s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-472903 node start m02 --alsologtostderr -v 5: (7.68837435s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-472903 status --alsologtostderr -v 5: exit status 7 (680.329776ms)

                                                
                                                
-- stdout --
	ha-472903
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-472903-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-472903-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-472903-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 00:11:56.797370  833815 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:11:56.797673  833815 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:11:56.797685  833815 out.go:374] Setting ErrFile to fd 2...
	I0917 00:11:56.797691  833815 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:11:56.797903  833815 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-749120/.minikube/bin
	I0917 00:11:56.798103  833815 out.go:368] Setting JSON to false
	I0917 00:11:56.798127  833815 mustload.go:65] Loading cluster: ha-472903
	I0917 00:11:56.798248  833815 notify.go:220] Checking for updates...
	I0917 00:11:56.798697  833815 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:11:56.798731  833815 status.go:174] checking status of ha-472903 ...
	I0917 00:11:56.799289  833815 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0917 00:11:56.818906  833815 status.go:371] ha-472903 host status = "Running" (err=<nil>)
	I0917 00:11:56.818942  833815 host.go:66] Checking if "ha-472903" exists ...
	I0917 00:11:56.819284  833815 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903
	I0917 00:11:56.837433  833815 host.go:66] Checking if "ha-472903" exists ...
	I0917 00:11:56.837651  833815 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:11:56.837692  833815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0917 00:11:56.854187  833815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0917 00:11:56.946581  833815 ssh_runner.go:195] Run: systemctl --version
	I0917 00:11:56.950796  833815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:11:56.961777  833815 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:11:57.016066  833815 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-17 00:11:57.006281494 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:11:57.016795  833815 kubeconfig.go:125] found "ha-472903" server: "https://192.168.49.254:8443"
	I0917 00:11:57.016827  833815 api_server.go:166] Checking apiserver status ...
	I0917 00:11:57.016860  833815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:11:57.029397  833815 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1514/cgroup
	W0917 00:11:57.039074  833815 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1514/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:11:57.039136  833815 ssh_runner.go:195] Run: ls
	I0917 00:11:57.042782  833815 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:11:57.046687  833815 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:11:57.046706  833815 status.go:463] ha-472903 apiserver status = Running (err=<nil>)
	I0917 00:11:57.046716  833815 status.go:176] ha-472903 status: &{Name:ha-472903 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:11:57.046730  833815 status.go:174] checking status of ha-472903-m02 ...
	I0917 00:11:57.046992  833815 cli_runner.go:164] Run: docker container inspect ha-472903-m02 --format={{.State.Status}}
	I0917 00:11:57.063662  833815 status.go:371] ha-472903-m02 host status = "Running" (err=<nil>)
	I0917 00:11:57.063682  833815 host.go:66] Checking if "ha-472903-m02" exists ...
	I0917 00:11:57.063935  833815 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m02
	I0917 00:11:57.080903  833815 host.go:66] Checking if "ha-472903-m02" exists ...
	I0917 00:11:57.081150  833815 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:11:57.081184  833815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0917 00:11:57.097383  833815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33569 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa Username:docker}
	I0917 00:11:57.188266  833815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:11:57.199587  833815 kubeconfig.go:125] found "ha-472903" server: "https://192.168.49.254:8443"
	I0917 00:11:57.199613  833815 api_server.go:166] Checking apiserver status ...
	I0917 00:11:57.199653  833815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:11:57.209813  833815 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/677/cgroup
	W0917 00:11:57.219381  833815 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/677/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:11:57.219450  833815 ssh_runner.go:195] Run: ls
	I0917 00:11:57.222729  833815 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:11:57.226831  833815 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:11:57.226849  833815 status.go:463] ha-472903-m02 apiserver status = Running (err=<nil>)
	I0917 00:11:57.226857  833815 status.go:176] ha-472903-m02 status: &{Name:ha-472903-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:11:57.226872  833815 status.go:174] checking status of ha-472903-m03 ...
	I0917 00:11:57.227129  833815 cli_runner.go:164] Run: docker container inspect ha-472903-m03 --format={{.State.Status}}
	I0917 00:11:57.244557  833815 status.go:371] ha-472903-m03 host status = "Running" (err=<nil>)
	I0917 00:11:57.244575  833815 host.go:66] Checking if "ha-472903-m03" exists ...
	I0917 00:11:57.244842  833815 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m03
	I0917 00:11:57.261136  833815 host.go:66] Checking if "ha-472903-m03" exists ...
	I0917 00:11:57.261427  833815 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:11:57.261479  833815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0917 00:11:57.278311  833815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33554 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa Username:docker}
	I0917 00:11:57.369475  833815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:11:57.381292  833815 kubeconfig.go:125] found "ha-472903" server: "https://192.168.49.254:8443"
	I0917 00:11:57.381321  833815 api_server.go:166] Checking apiserver status ...
	I0917 00:11:57.381367  833815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:11:57.393483  833815 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1396/cgroup
	W0917 00:11:57.403604  833815 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1396/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:11:57.403642  833815 ssh_runner.go:195] Run: ls
	I0917 00:11:57.406819  833815 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:11:57.410731  833815 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:11:57.410749  833815 status.go:463] ha-472903-m03 apiserver status = Running (err=<nil>)
	I0917 00:11:57.410758  833815 status.go:176] ha-472903-m03 status: &{Name:ha-472903-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:11:57.410775  833815 status.go:174] checking status of ha-472903-m04 ...
	I0917 00:11:57.411022  833815 cli_runner.go:164] Run: docker container inspect ha-472903-m04 --format={{.State.Status}}
	I0917 00:11:57.428277  833815 status.go:371] ha-472903-m04 host status = "Stopped" (err=<nil>)
	I0917 00:11:57.428293  833815 status.go:384] host is not running, skipping remaining checks
	I0917 00:11:57.428298  833815 status.go:176] ha-472903-m04 status: &{Name:ha-472903-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0917 00:11:57.433966  752707 retry.go:31] will retry after 866.199486ms: exit status 7
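Exit status 7 is consistent with the output above: minikube status encodes per-node component health in its exit code as a bitmask, with separate bits for the host, for kubelet/apiserver, and for the kubeconfig, so a fully stopped ha-472903-m04 sets 1 + 2 + 4 = 7 and the test retries with backoff. The same code can be observed by hand, assuming the ha-472903 profile still exists on this host:

	out/minikube-linux-amd64 -p ha-472903 status
	echo $?    # prints 7 while ha-472903-m04 remains stopped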
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-472903 status --alsologtostderr -v 5: exit status 7 (689.658881ms)

                                                
                                                
-- stdout --
	ha-472903
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-472903-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-472903-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-472903-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 00:11:58.345160  834035 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:11:58.345304  834035 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:11:58.345315  834035 out.go:374] Setting ErrFile to fd 2...
	I0917 00:11:58.345321  834035 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:11:58.345537  834035 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-749120/.minikube/bin
	I0917 00:11:58.345711  834035 out.go:368] Setting JSON to false
	I0917 00:11:58.345734  834035 mustload.go:65] Loading cluster: ha-472903
	I0917 00:11:58.345859  834035 notify.go:220] Checking for updates...
	I0917 00:11:58.346173  834035 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:11:58.346210  834035 status.go:174] checking status of ha-472903 ...
	I0917 00:11:58.346685  834035 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0917 00:11:58.367392  834035 status.go:371] ha-472903 host status = "Running" (err=<nil>)
	I0917 00:11:58.367442  834035 host.go:66] Checking if "ha-472903" exists ...
	I0917 00:11:58.367758  834035 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903
	I0917 00:11:58.384228  834035 host.go:66] Checking if "ha-472903" exists ...
	I0917 00:11:58.384479  834035 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:11:58.384522  834035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0917 00:11:58.400483  834035 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0917 00:11:58.492577  834035 ssh_runner.go:195] Run: systemctl --version
	I0917 00:11:58.497139  834035 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:11:58.508408  834035 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:11:58.562832  834035 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-17 00:11:58.553048349 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:11:58.563383  834035 kubeconfig.go:125] found "ha-472903" server: "https://192.168.49.254:8443"
	I0917 00:11:58.563436  834035 api_server.go:166] Checking apiserver status ...
	I0917 00:11:58.563472  834035 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:11:58.575685  834035 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1514/cgroup
	W0917 00:11:58.585230  834035 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1514/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:11:58.585275  834035 ssh_runner.go:195] Run: ls
	I0917 00:11:58.588652  834035 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:11:58.592665  834035 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:11:58.592692  834035 status.go:463] ha-472903 apiserver status = Running (err=<nil>)
	I0917 00:11:58.592705  834035 status.go:176] ha-472903 status: &{Name:ha-472903 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:11:58.592725  834035 status.go:174] checking status of ha-472903-m02 ...
	I0917 00:11:58.593064  834035 cli_runner.go:164] Run: docker container inspect ha-472903-m02 --format={{.State.Status}}
	I0917 00:11:58.610372  834035 status.go:371] ha-472903-m02 host status = "Running" (err=<nil>)
	I0917 00:11:58.610396  834035 host.go:66] Checking if "ha-472903-m02" exists ...
	I0917 00:11:58.610670  834035 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m02
	I0917 00:11:58.627890  834035 host.go:66] Checking if "ha-472903-m02" exists ...
	I0917 00:11:58.628119  834035 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:11:58.628160  834035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0917 00:11:58.646462  834035 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33569 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa Username:docker}
	I0917 00:11:58.738833  834035 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:11:58.750525  834035 kubeconfig.go:125] found "ha-472903" server: "https://192.168.49.254:8443"
	I0917 00:11:58.750556  834035 api_server.go:166] Checking apiserver status ...
	I0917 00:11:58.750600  834035 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:11:58.761816  834035 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/677/cgroup
	W0917 00:11:58.771302  834035 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/677/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:11:58.771350  834035 ssh_runner.go:195] Run: ls
	I0917 00:11:58.775090  834035 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:11:58.780661  834035 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:11:58.780688  834035 status.go:463] ha-472903-m02 apiserver status = Running (err=<nil>)
	I0917 00:11:58.780701  834035 status.go:176] ha-472903-m02 status: &{Name:ha-472903-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:11:58.780730  834035 status.go:174] checking status of ha-472903-m03 ...
	I0917 00:11:58.781004  834035 cli_runner.go:164] Run: docker container inspect ha-472903-m03 --format={{.State.Status}}
	I0917 00:11:58.798504  834035 status.go:371] ha-472903-m03 host status = "Running" (err=<nil>)
	I0917 00:11:58.798521  834035 host.go:66] Checking if "ha-472903-m03" exists ...
	I0917 00:11:58.798783  834035 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m03
	I0917 00:11:58.815247  834035 host.go:66] Checking if "ha-472903-m03" exists ...
	I0917 00:11:58.815542  834035 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:11:58.815590  834035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0917 00:11:58.832799  834035 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33554 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa Username:docker}
	I0917 00:11:58.924746  834035 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:11:58.936145  834035 kubeconfig.go:125] found "ha-472903" server: "https://192.168.49.254:8443"
	I0917 00:11:58.936173  834035 api_server.go:166] Checking apiserver status ...
	I0917 00:11:58.936212  834035 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:11:58.947813  834035 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1396/cgroup
	W0917 00:11:58.957368  834035 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1396/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:11:58.957431  834035 ssh_runner.go:195] Run: ls
	I0917 00:11:58.960740  834035 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:11:58.964784  834035 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:11:58.964807  834035 status.go:463] ha-472903-m03 apiserver status = Running (err=<nil>)
	I0917 00:11:58.964818  834035 status.go:176] ha-472903-m03 status: &{Name:ha-472903-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:11:58.964839  834035 status.go:174] checking status of ha-472903-m04 ...
	I0917 00:11:58.965095  834035 cli_runner.go:164] Run: docker container inspect ha-472903-m04 --format={{.State.Status}}
	I0917 00:11:58.984856  834035 status.go:371] ha-472903-m04 host status = "Stopped" (err=<nil>)
	I0917 00:11:58.984875  834035 status.go:384] host is not running, skipping remaining checks
	I0917 00:11:58.984881  834035 status.go:176] ha-472903-m04 status: &{Name:ha-472903-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0917 00:11:58.991520  752707 retry.go:31] will retry after 1.398604461s: exit status 7
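The recurring "unable to find freezer cgroup" warnings in these status checks are harmless: on a cgroup v2 host, which this Ubuntu 22.04 / 6.8 kernel machine almost certainly is, /proc/<pid>/cgroup has no freezer: line for the grep to find, so the check falls back to probing the apiserver's /healthz endpoint, which returns 200 each time above. The same probe can be issued manually, assuming the load-balancer address from the logs is reachable from where the command runs:

	curl -k https://192.168.49.254:8443/healthz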
E0917 00:12:00.234003  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/addons-346612/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-472903 status --alsologtostderr -v 5: exit status 7 (691.212834ms)

                                                
                                                
-- stdout --
	ha-472903
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-472903-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-472903-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-472903-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 00:12:00.432806  834267 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:12:00.433065  834267 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:12:00.433075  834267 out.go:374] Setting ErrFile to fd 2...
	I0917 00:12:00.433082  834267 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:12:00.433290  834267 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-749120/.minikube/bin
	I0917 00:12:00.433525  834267 out.go:368] Setting JSON to false
	I0917 00:12:00.433548  834267 mustload.go:65] Loading cluster: ha-472903
	I0917 00:12:00.433664  834267 notify.go:220] Checking for updates...
	I0917 00:12:00.434154  834267 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:12:00.434186  834267 status.go:174] checking status of ha-472903 ...
	I0917 00:12:00.434631  834267 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0917 00:12:00.455845  834267 status.go:371] ha-472903 host status = "Running" (err=<nil>)
	I0917 00:12:00.455883  834267 host.go:66] Checking if "ha-472903" exists ...
	I0917 00:12:00.456145  834267 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903
	I0917 00:12:00.473161  834267 host.go:66] Checking if "ha-472903" exists ...
	I0917 00:12:00.473396  834267 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:12:00.473461  834267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0917 00:12:00.489998  834267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0917 00:12:00.581456  834267 ssh_runner.go:195] Run: systemctl --version
	I0917 00:12:00.585653  834267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:12:00.597003  834267 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:12:00.655287  834267 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-17 00:12:00.643496934 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:12:00.656132  834267 kubeconfig.go:125] found "ha-472903" server: "https://192.168.49.254:8443"
	I0917 00:12:00.656173  834267 api_server.go:166] Checking apiserver status ...
	I0917 00:12:00.656217  834267 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:12:00.669139  834267 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1514/cgroup
	W0917 00:12:00.678851  834267 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1514/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:12:00.678907  834267 ssh_runner.go:195] Run: ls
	I0917 00:12:00.682316  834267 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:12:00.687899  834267 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:12:00.687922  834267 status.go:463] ha-472903 apiserver status = Running (err=<nil>)
	I0917 00:12:00.687932  834267 status.go:176] ha-472903 status: &{Name:ha-472903 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:12:00.687948  834267 status.go:174] checking status of ha-472903-m02 ...
	I0917 00:12:00.688216  834267 cli_runner.go:164] Run: docker container inspect ha-472903-m02 --format={{.State.Status}}
	I0917 00:12:00.705791  834267 status.go:371] ha-472903-m02 host status = "Running" (err=<nil>)
	I0917 00:12:00.705816  834267 host.go:66] Checking if "ha-472903-m02" exists ...
	I0917 00:12:00.706083  834267 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m02
	I0917 00:12:00.723941  834267 host.go:66] Checking if "ha-472903-m02" exists ...
	I0917 00:12:00.724204  834267 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:12:00.724261  834267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0917 00:12:00.740128  834267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33569 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa Username:docker}
	I0917 00:12:00.832540  834267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:12:00.844330  834267 kubeconfig.go:125] found "ha-472903" server: "https://192.168.49.254:8443"
	I0917 00:12:00.844358  834267 api_server.go:166] Checking apiserver status ...
	I0917 00:12:00.844397  834267 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:12:00.855089  834267 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/677/cgroup
	W0917 00:12:00.864683  834267 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/677/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:12:00.864724  834267 ssh_runner.go:195] Run: ls
	I0917 00:12:00.868008  834267 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:12:00.872029  834267 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:12:00.872049  834267 status.go:463] ha-472903-m02 apiserver status = Running (err=<nil>)
	I0917 00:12:00.872059  834267 status.go:176] ha-472903-m02 status: &{Name:ha-472903-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:12:00.872078  834267 status.go:174] checking status of ha-472903-m03 ...
	I0917 00:12:00.872392  834267 cli_runner.go:164] Run: docker container inspect ha-472903-m03 --format={{.State.Status}}
	I0917 00:12:00.890687  834267 status.go:371] ha-472903-m03 host status = "Running" (err=<nil>)
	I0917 00:12:00.890710  834267 host.go:66] Checking if "ha-472903-m03" exists ...
	I0917 00:12:00.891038  834267 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m03
	I0917 00:12:00.907627  834267 host.go:66] Checking if "ha-472903-m03" exists ...
	I0917 00:12:00.907879  834267 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:12:00.907926  834267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0917 00:12:00.924333  834267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33554 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa Username:docker}
	I0917 00:12:01.017644  834267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:12:01.030186  834267 kubeconfig.go:125] found "ha-472903" server: "https://192.168.49.254:8443"
	I0917 00:12:01.030214  834267 api_server.go:166] Checking apiserver status ...
	I0917 00:12:01.030245  834267 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:12:01.041362  834267 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1396/cgroup
	W0917 00:12:01.050486  834267 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1396/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:12:01.050531  834267 ssh_runner.go:195] Run: ls
	I0917 00:12:01.053725  834267 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:12:01.057711  834267 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:12:01.057730  834267 status.go:463] ha-472903-m03 apiserver status = Running (err=<nil>)
	I0917 00:12:01.057739  834267 status.go:176] ha-472903-m03 status: &{Name:ha-472903-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:12:01.057756  834267 status.go:174] checking status of ha-472903-m04 ...
	I0917 00:12:01.058026  834267 cli_runner.go:164] Run: docker container inspect ha-472903-m04 --format={{.State.Status}}
	I0917 00:12:01.076309  834267 status.go:371] ha-472903-m04 host status = "Stopped" (err=<nil>)
	I0917 00:12:01.076328  834267 status.go:384] host is not running, skipping remaining checks
	I0917 00:12:01.076334  834267 status.go:176] ha-472903-m04 status: &{Name:ha-472903-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0917 00:12:01.082262  752707 retry.go:31] will retry after 2.910419768s: exit status 7
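Each per-node check above reaches the node over SSH through the host port that Docker publishes for the container's 22/tcp (33544 for ha-472903, 33569 for m02, 33554 for m03). If such a connection ever needs to be verified outside the test, the mapping can be read back from Docker directly, for example:

	docker port ha-472903 22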
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-472903 status --alsologtostderr -v 5: exit status 7 (685.870993ms)

                                                
                                                
-- stdout --
	ha-472903
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-472903-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-472903-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-472903-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 00:12:04.038480  834484 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:12:04.038806  834484 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:12:04.038818  834484 out.go:374] Setting ErrFile to fd 2...
	I0917 00:12:04.038822  834484 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:12:04.039006  834484 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-749120/.minikube/bin
	I0917 00:12:04.039178  834484 out.go:368] Setting JSON to false
	I0917 00:12:04.039198  834484 mustload.go:65] Loading cluster: ha-472903
	I0917 00:12:04.039334  834484 notify.go:220] Checking for updates...
	I0917 00:12:04.039609  834484 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:12:04.039631  834484 status.go:174] checking status of ha-472903 ...
	I0917 00:12:04.040068  834484 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0917 00:12:04.058623  834484 status.go:371] ha-472903 host status = "Running" (err=<nil>)
	I0917 00:12:04.058671  834484 host.go:66] Checking if "ha-472903" exists ...
	I0917 00:12:04.058945  834484 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903
	I0917 00:12:04.077099  834484 host.go:66] Checking if "ha-472903" exists ...
	I0917 00:12:04.077344  834484 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:12:04.077397  834484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0917 00:12:04.094424  834484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0917 00:12:04.186356  834484 ssh_runner.go:195] Run: systemctl --version
	I0917 00:12:04.190786  834484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:12:04.202086  834484 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:12:04.256838  834484 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-17 00:12:04.247239922 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:12:04.257571  834484 kubeconfig.go:125] found "ha-472903" server: "https://192.168.49.254:8443"
	I0917 00:12:04.257612  834484 api_server.go:166] Checking apiserver status ...
	I0917 00:12:04.257654  834484 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:12:04.270033  834484 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1514/cgroup
	W0917 00:12:04.279938  834484 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1514/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:12:04.279978  834484 ssh_runner.go:195] Run: ls
	I0917 00:12:04.283742  834484 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:12:04.289876  834484 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:12:04.289899  834484 status.go:463] ha-472903 apiserver status = Running (err=<nil>)
	I0917 00:12:04.289911  834484 status.go:176] ha-472903 status: &{Name:ha-472903 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:12:04.289935  834484 status.go:174] checking status of ha-472903-m02 ...
	I0917 00:12:04.290175  834484 cli_runner.go:164] Run: docker container inspect ha-472903-m02 --format={{.State.Status}}
	I0917 00:12:04.306697  834484 status.go:371] ha-472903-m02 host status = "Running" (err=<nil>)
	I0917 00:12:04.306715  834484 host.go:66] Checking if "ha-472903-m02" exists ...
	I0917 00:12:04.306951  834484 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m02
	I0917 00:12:04.324228  834484 host.go:66] Checking if "ha-472903-m02" exists ...
	I0917 00:12:04.324488  834484 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:12:04.324527  834484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0917 00:12:04.341238  834484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33569 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa Username:docker}
	I0917 00:12:04.432292  834484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:12:04.444299  834484 kubeconfig.go:125] found "ha-472903" server: "https://192.168.49.254:8443"
	I0917 00:12:04.444329  834484 api_server.go:166] Checking apiserver status ...
	I0917 00:12:04.444367  834484 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:12:04.455250  834484 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/677/cgroup
	W0917 00:12:04.464439  834484 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/677/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:12:04.464494  834484 ssh_runner.go:195] Run: ls
	I0917 00:12:04.467945  834484 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:12:04.472064  834484 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:12:04.472089  834484 status.go:463] ha-472903-m02 apiserver status = Running (err=<nil>)
	I0917 00:12:04.472097  834484 status.go:176] ha-472903-m02 status: &{Name:ha-472903-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:12:04.472113  834484 status.go:174] checking status of ha-472903-m03 ...
	I0917 00:12:04.472348  834484 cli_runner.go:164] Run: docker container inspect ha-472903-m03 --format={{.State.Status}}
	I0917 00:12:04.490392  834484 status.go:371] ha-472903-m03 host status = "Running" (err=<nil>)
	I0917 00:12:04.490429  834484 host.go:66] Checking if "ha-472903-m03" exists ...
	I0917 00:12:04.490672  834484 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m03
	I0917 00:12:04.507388  834484 host.go:66] Checking if "ha-472903-m03" exists ...
	I0917 00:12:04.507650  834484 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:12:04.507689  834484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0917 00:12:04.524153  834484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33554 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa Username:docker}
	I0917 00:12:04.617717  834484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:12:04.629777  834484 kubeconfig.go:125] found "ha-472903" server: "https://192.168.49.254:8443"
	I0917 00:12:04.629809  834484 api_server.go:166] Checking apiserver status ...
	I0917 00:12:04.629851  834484 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:12:04.641285  834484 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1396/cgroup
	W0917 00:12:04.650661  834484 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1396/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:12:04.650707  834484 ssh_runner.go:195] Run: ls
	I0917 00:12:04.654184  834484 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:12:04.658296  834484 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:12:04.658318  834484 status.go:463] ha-472903-m03 apiserver status = Running (err=<nil>)
	I0917 00:12:04.658328  834484 status.go:176] ha-472903-m03 status: &{Name:ha-472903-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:12:04.658347  834484 status.go:174] checking status of ha-472903-m04 ...
	I0917 00:12:04.658607  834484 cli_runner.go:164] Run: docker container inspect ha-472903-m04 --format={{.State.Status}}
	I0917 00:12:04.675984  834484 status.go:371] ha-472903-m04 host status = "Stopped" (err=<nil>)
	I0917 00:12:04.676003  834484 status.go:384] host is not running, skipping remaining checks
	I0917 00:12:04.676011  834484 status.go:176] ha-472903-m04 status: &{Name:ha-472903-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0917 00:12:04.682008  752707 retry.go:31] will retry after 3.229017235s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-472903 status --alsologtostderr -v 5: exit status 7 (695.150228ms)

                                                
                                                
-- stdout --
	ha-472903
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-472903-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-472903-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-472903-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 00:12:07.962778  834738 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:12:07.962871  834738 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:12:07.962879  834738 out.go:374] Setting ErrFile to fd 2...
	I0917 00:12:07.962884  834738 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:12:07.963053  834738 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-749120/.minikube/bin
	I0917 00:12:07.963246  834738 out.go:368] Setting JSON to false
	I0917 00:12:07.963268  834738 mustload.go:65] Loading cluster: ha-472903
	I0917 00:12:07.963338  834738 notify.go:220] Checking for updates...
	I0917 00:12:07.963685  834738 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:12:07.963710  834738 status.go:174] checking status of ha-472903 ...
	I0917 00:12:07.964222  834738 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0917 00:12:07.982511  834738 status.go:371] ha-472903 host status = "Running" (err=<nil>)
	I0917 00:12:07.982564  834738 host.go:66] Checking if "ha-472903" exists ...
	I0917 00:12:07.982865  834738 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903
	I0917 00:12:08.000302  834738 host.go:66] Checking if "ha-472903" exists ...
	I0917 00:12:08.000651  834738 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:12:08.000701  834738 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0917 00:12:08.019130  834738 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0917 00:12:08.113567  834738 ssh_runner.go:195] Run: systemctl --version
	I0917 00:12:08.117745  834738 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:12:08.129189  834738 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:12:08.183249  834738 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-17 00:12:08.17319371 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:12:08.183830  834738 kubeconfig.go:125] found "ha-472903" server: "https://192.168.49.254:8443"
	I0917 00:12:08.183869  834738 api_server.go:166] Checking apiserver status ...
	I0917 00:12:08.183911  834738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:12:08.195707  834738 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1514/cgroup
	W0917 00:12:08.205014  834738 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1514/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:12:08.205073  834738 ssh_runner.go:195] Run: ls
	I0917 00:12:08.208527  834738 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:12:08.212721  834738 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:12:08.212746  834738 status.go:463] ha-472903 apiserver status = Running (err=<nil>)
	I0917 00:12:08.212759  834738 status.go:176] ha-472903 status: &{Name:ha-472903 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:12:08.212801  834738 status.go:174] checking status of ha-472903-m02 ...
	I0917 00:12:08.213119  834738 cli_runner.go:164] Run: docker container inspect ha-472903-m02 --format={{.State.Status}}
	I0917 00:12:08.231098  834738 status.go:371] ha-472903-m02 host status = "Running" (err=<nil>)
	I0917 00:12:08.231122  834738 host.go:66] Checking if "ha-472903-m02" exists ...
	I0917 00:12:08.231356  834738 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m02
	I0917 00:12:08.247767  834738 host.go:66] Checking if "ha-472903-m02" exists ...
	I0917 00:12:08.248077  834738 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:12:08.248128  834738 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0917 00:12:08.265841  834738 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33569 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa Username:docker}
	I0917 00:12:08.358759  834738 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:12:08.370602  834738 kubeconfig.go:125] found "ha-472903" server: "https://192.168.49.254:8443"
	I0917 00:12:08.370636  834738 api_server.go:166] Checking apiserver status ...
	I0917 00:12:08.370678  834738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:12:08.381692  834738 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/677/cgroup
	W0917 00:12:08.390731  834738 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/677/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:12:08.390780  834738 ssh_runner.go:195] Run: ls
	I0917 00:12:08.394132  834738 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:12:08.398109  834738 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:12:08.398130  834738 status.go:463] ha-472903-m02 apiserver status = Running (err=<nil>)
	I0917 00:12:08.398139  834738 status.go:176] ha-472903-m02 status: &{Name:ha-472903-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:12:08.398160  834738 status.go:174] checking status of ha-472903-m03 ...
	I0917 00:12:08.398394  834738 cli_runner.go:164] Run: docker container inspect ha-472903-m03 --format={{.State.Status}}
	I0917 00:12:08.415644  834738 status.go:371] ha-472903-m03 host status = "Running" (err=<nil>)
	I0917 00:12:08.415666  834738 host.go:66] Checking if "ha-472903-m03" exists ...
	I0917 00:12:08.415942  834738 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m03
	I0917 00:12:08.433976  834738 host.go:66] Checking if "ha-472903-m03" exists ...
	I0917 00:12:08.434280  834738 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:12:08.434327  834738 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0917 00:12:08.451459  834738 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33554 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa Username:docker}
	I0917 00:12:08.543387  834738 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:12:08.555410  834738 kubeconfig.go:125] found "ha-472903" server: "https://192.168.49.254:8443"
	I0917 00:12:08.555457  834738 api_server.go:166] Checking apiserver status ...
	I0917 00:12:08.555497  834738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:12:08.566025  834738 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1396/cgroup
	W0917 00:12:08.575772  834738 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1396/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:12:08.575816  834738 ssh_runner.go:195] Run: ls
	I0917 00:12:08.579287  834738 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:12:08.583270  834738 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:12:08.583294  834738 status.go:463] ha-472903-m03 apiserver status = Running (err=<nil>)
	I0917 00:12:08.583306  834738 status.go:176] ha-472903-m03 status: &{Name:ha-472903-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:12:08.583325  834738 status.go:174] checking status of ha-472903-m04 ...
	I0917 00:12:08.583589  834738 cli_runner.go:164] Run: docker container inspect ha-472903-m04 --format={{.State.Status}}
	I0917 00:12:08.601341  834738 status.go:371] ha-472903-m04 host status = "Stopped" (err=<nil>)
	I0917 00:12:08.601366  834738 status.go:384] host is not running, skipping remaining checks
	I0917 00:12:08.601372  834738 status.go:176] ha-472903-m04 status: &{Name:ha-472903-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0917 00:12:08.607540  752707 retry.go:31] will retry after 3.42372154s: exit status 7
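Each retry above repeats the same per-node probe: read the node container's state with docker container inspect, confirm the kubelet service is active, locate the kube-apiserver process, and GET /healthz on the cluster endpoint. Below is a minimal standalone Go sketch of the two externally visible checks (container state and the /healthz probe); it is an illustration, not the minikube-internal status code, and the node names and the https://192.168.49.254:8443 endpoint are taken from this log and are assumptions anywhere else.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os/exec"
	"strings"
)

// containerState mirrors the "docker container inspect --format={{.State.Status}}"
// call seen in the trace above.
func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	return strings.TrimSpace(string(out)), err
}

// apiserverHealthz performs the same kind of probe as the
// "Checking apiserver healthz at ..." step; the test cluster uses a
// self-signed certificate, so verification is skipped for this sketch only.
func apiserverHealthz(url string) (int, error) {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get(url)
	if err != nil {
		return 0, err
	}
	resp.Body.Close()
	return resp.StatusCode, nil
}

func main() {
	// Node names as reported in this run (assumption outside this profile).
	for _, node := range []string{"ha-472903", "ha-472903-m02", "ha-472903-m03", "ha-472903-m04"} {
		state, err := containerState(node)
		fmt.Printf("%s: host=%q err=%v\n", node, state, err)
	}
	// VIP and port as reported in this log.
	code, err := apiserverHealthz("https://192.168.49.254:8443/healthz")
	fmt.Printf("apiserver /healthz: %d err=%v\n", code, err)
}

With ha-472903-m04 reported as "Stopped", the container-state check alone accounts for the host/kubelet "Stopped" lines in the stdout blocks that follow.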
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-472903 status --alsologtostderr -v 5: exit status 7 (682.336365ms)

                                                
                                                
-- stdout --
	ha-472903
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-472903-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-472903-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-472903-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 00:12:12.075300  835038 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:12:12.075641  835038 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:12:12.075654  835038 out.go:374] Setting ErrFile to fd 2...
	I0917 00:12:12.075659  835038 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:12:12.075895  835038 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-749120/.minikube/bin
	I0917 00:12:12.076067  835038 out.go:368] Setting JSON to false
	I0917 00:12:12.076091  835038 mustload.go:65] Loading cluster: ha-472903
	I0917 00:12:12.076171  835038 notify.go:220] Checking for updates...
	I0917 00:12:12.076685  835038 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:12:12.076721  835038 status.go:174] checking status of ha-472903 ...
	I0917 00:12:12.077280  835038 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0917 00:12:12.096290  835038 status.go:371] ha-472903 host status = "Running" (err=<nil>)
	I0917 00:12:12.096313  835038 host.go:66] Checking if "ha-472903" exists ...
	I0917 00:12:12.096572  835038 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903
	I0917 00:12:12.113243  835038 host.go:66] Checking if "ha-472903" exists ...
	I0917 00:12:12.113605  835038 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:12:12.113657  835038 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0917 00:12:12.129681  835038 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0917 00:12:12.221366  835038 ssh_runner.go:195] Run: systemctl --version
	I0917 00:12:12.225816  835038 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:12:12.237489  835038 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:12:12.292654  835038 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-17 00:12:12.282514549 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:12:12.293218  835038 kubeconfig.go:125] found "ha-472903" server: "https://192.168.49.254:8443"
	I0917 00:12:12.293261  835038 api_server.go:166] Checking apiserver status ...
	I0917 00:12:12.293319  835038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:12:12.305282  835038 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1514/cgroup
	W0917 00:12:12.315188  835038 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1514/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:12:12.315241  835038 ssh_runner.go:195] Run: ls
	I0917 00:12:12.318908  835038 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:12:12.323045  835038 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:12:12.323066  835038 status.go:463] ha-472903 apiserver status = Running (err=<nil>)
	I0917 00:12:12.323077  835038 status.go:176] ha-472903 status: &{Name:ha-472903 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:12:12.323096  835038 status.go:174] checking status of ha-472903-m02 ...
	I0917 00:12:12.323356  835038 cli_runner.go:164] Run: docker container inspect ha-472903-m02 --format={{.State.Status}}
	I0917 00:12:12.340154  835038 status.go:371] ha-472903-m02 host status = "Running" (err=<nil>)
	I0917 00:12:12.340175  835038 host.go:66] Checking if "ha-472903-m02" exists ...
	I0917 00:12:12.340422  835038 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m02
	I0917 00:12:12.357064  835038 host.go:66] Checking if "ha-472903-m02" exists ...
	I0917 00:12:12.357306  835038 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:12:12.357342  835038 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0917 00:12:12.373181  835038 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33569 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa Username:docker}
	I0917 00:12:12.467393  835038 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:12:12.479716  835038 kubeconfig.go:125] found "ha-472903" server: "https://192.168.49.254:8443"
	I0917 00:12:12.479747  835038 api_server.go:166] Checking apiserver status ...
	I0917 00:12:12.479788  835038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:12:12.490747  835038 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/677/cgroup
	W0917 00:12:12.499932  835038 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/677/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:12:12.499982  835038 ssh_runner.go:195] Run: ls
	I0917 00:12:12.503479  835038 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:12:12.507649  835038 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:12:12.507670  835038 status.go:463] ha-472903-m02 apiserver status = Running (err=<nil>)
	I0917 00:12:12.507679  835038 status.go:176] ha-472903-m02 status: &{Name:ha-472903-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:12:12.507692  835038 status.go:174] checking status of ha-472903-m03 ...
	I0917 00:12:12.507937  835038 cli_runner.go:164] Run: docker container inspect ha-472903-m03 --format={{.State.Status}}
	I0917 00:12:12.525156  835038 status.go:371] ha-472903-m03 host status = "Running" (err=<nil>)
	I0917 00:12:12.525177  835038 host.go:66] Checking if "ha-472903-m03" exists ...
	I0917 00:12:12.525442  835038 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m03
	I0917 00:12:12.543138  835038 host.go:66] Checking if "ha-472903-m03" exists ...
	I0917 00:12:12.543408  835038 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:12:12.543477  835038 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0917 00:12:12.559968  835038 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33554 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa Username:docker}
	I0917 00:12:12.651448  835038 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:12:12.663478  835038 kubeconfig.go:125] found "ha-472903" server: "https://192.168.49.254:8443"
	I0917 00:12:12.663504  835038 api_server.go:166] Checking apiserver status ...
	I0917 00:12:12.663533  835038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:12:12.674678  835038 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1396/cgroup
	W0917 00:12:12.684330  835038 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1396/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:12:12.684368  835038 ssh_runner.go:195] Run: ls
	I0917 00:12:12.687771  835038 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:12:12.691797  835038 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:12:12.691817  835038 status.go:463] ha-472903-m03 apiserver status = Running (err=<nil>)
	I0917 00:12:12.691829  835038 status.go:176] ha-472903-m03 status: &{Name:ha-472903-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:12:12.691859  835038 status.go:174] checking status of ha-472903-m04 ...
	I0917 00:12:12.692132  835038 cli_runner.go:164] Run: docker container inspect ha-472903-m04 --format={{.State.Status}}
	I0917 00:12:12.709277  835038 status.go:371] ha-472903-m04 host status = "Stopped" (err=<nil>)
	I0917 00:12:12.709300  835038 status.go:384] host is not running, skipping remaining checks
	I0917 00:12:12.709308  835038 status.go:176] ha-472903-m04 status: &{Name:ha-472903-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0917 00:12:12.715325  752707 retry.go:31] will retry after 8.919960077s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-472903 status --alsologtostderr -v 5: exit status 7 (701.065007ms)

                                                
                                                
-- stdout --
	ha-472903
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-472903-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-472903-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-472903-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 00:12:21.680108  835311 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:12:21.680396  835311 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:12:21.680408  835311 out.go:374] Setting ErrFile to fd 2...
	I0917 00:12:21.680444  835311 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:12:21.680717  835311 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-749120/.minikube/bin
	I0917 00:12:21.680932  835311 out.go:368] Setting JSON to false
	I0917 00:12:21.680957  835311 mustload.go:65] Loading cluster: ha-472903
	I0917 00:12:21.681081  835311 notify.go:220] Checking for updates...
	I0917 00:12:21.681521  835311 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:12:21.681557  835311 status.go:174] checking status of ha-472903 ...
	I0917 00:12:21.682185  835311 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0917 00:12:21.701522  835311 status.go:371] ha-472903 host status = "Running" (err=<nil>)
	I0917 00:12:21.701556  835311 host.go:66] Checking if "ha-472903" exists ...
	I0917 00:12:21.701813  835311 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903
	I0917 00:12:21.720316  835311 host.go:66] Checking if "ha-472903" exists ...
	I0917 00:12:21.720584  835311 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:12:21.720624  835311 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0917 00:12:21.738217  835311 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0917 00:12:21.832767  835311 ssh_runner.go:195] Run: systemctl --version
	I0917 00:12:21.837577  835311 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:12:21.849443  835311 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:12:21.905914  835311 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-17 00:12:21.895453548 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:12:21.906525  835311 kubeconfig.go:125] found "ha-472903" server: "https://192.168.49.254:8443"
	I0917 00:12:21.906561  835311 api_server.go:166] Checking apiserver status ...
	I0917 00:12:21.906607  835311 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:12:21.918775  835311 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1514/cgroup
	W0917 00:12:21.928790  835311 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1514/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:12:21.928835  835311 ssh_runner.go:195] Run: ls
	I0917 00:12:21.932421  835311 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:12:21.936686  835311 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:12:21.936708  835311 status.go:463] ha-472903 apiserver status = Running (err=<nil>)
	I0917 00:12:21.936722  835311 status.go:176] ha-472903 status: &{Name:ha-472903 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:12:21.936741  835311 status.go:174] checking status of ha-472903-m02 ...
	I0917 00:12:21.936976  835311 cli_runner.go:164] Run: docker container inspect ha-472903-m02 --format={{.State.Status}}
	I0917 00:12:21.955034  835311 status.go:371] ha-472903-m02 host status = "Running" (err=<nil>)
	I0917 00:12:21.955069  835311 host.go:66] Checking if "ha-472903-m02" exists ...
	I0917 00:12:21.955427  835311 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m02
	I0917 00:12:21.972310  835311 host.go:66] Checking if "ha-472903-m02" exists ...
	I0917 00:12:21.972605  835311 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:12:21.972660  835311 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0917 00:12:21.991040  835311 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33569 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa Username:docker}
	I0917 00:12:22.084605  835311 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:12:22.096649  835311 kubeconfig.go:125] found "ha-472903" server: "https://192.168.49.254:8443"
	I0917 00:12:22.096679  835311 api_server.go:166] Checking apiserver status ...
	I0917 00:12:22.096716  835311 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:12:22.107558  835311 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/677/cgroup
	W0917 00:12:22.117155  835311 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/677/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:12:22.117199  835311 ssh_runner.go:195] Run: ls
	I0917 00:12:22.120903  835311 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:12:22.125053  835311 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:12:22.125078  835311 status.go:463] ha-472903-m02 apiserver status = Running (err=<nil>)
	I0917 00:12:22.125090  835311 status.go:176] ha-472903-m02 status: &{Name:ha-472903-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:12:22.125149  835311 status.go:174] checking status of ha-472903-m03 ...
	I0917 00:12:22.125515  835311 cli_runner.go:164] Run: docker container inspect ha-472903-m03 --format={{.State.Status}}
	I0917 00:12:22.143982  835311 status.go:371] ha-472903-m03 host status = "Running" (err=<nil>)
	I0917 00:12:22.144006  835311 host.go:66] Checking if "ha-472903-m03" exists ...
	I0917 00:12:22.144307  835311 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m03
	I0917 00:12:22.161259  835311 host.go:66] Checking if "ha-472903-m03" exists ...
	I0917 00:12:22.161613  835311 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:12:22.161673  835311 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0917 00:12:22.178227  835311 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33554 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa Username:docker}
	I0917 00:12:22.270886  835311 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:12:22.283843  835311 kubeconfig.go:125] found "ha-472903" server: "https://192.168.49.254:8443"
	I0917 00:12:22.283875  835311 api_server.go:166] Checking apiserver status ...
	I0917 00:12:22.283917  835311 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:12:22.294962  835311 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1396/cgroup
	W0917 00:12:22.304685  835311 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1396/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:12:22.304734  835311 ssh_runner.go:195] Run: ls
	I0917 00:12:22.308154  835311 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:12:22.312429  835311 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:12:22.312454  835311 status.go:463] ha-472903-m03 apiserver status = Running (err=<nil>)
	I0917 00:12:22.312464  835311 status.go:176] ha-472903-m03 status: &{Name:ha-472903-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:12:22.312494  835311 status.go:174] checking status of ha-472903-m04 ...
	I0917 00:12:22.312812  835311 cli_runner.go:164] Run: docker container inspect ha-472903-m04 --format={{.State.Status}}
	I0917 00:12:22.332291  835311 status.go:371] ha-472903-m04 host status = "Stopped" (err=<nil>)
	I0917 00:12:22.332313  835311 status.go:384] host is not running, skipping remaining checks
	I0917 00:12:22.332322  835311 status.go:176] ha-472903-m04 status: &{Name:ha-472903-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0917 00:12:22.338375  752707 retry.go:31] will retry after 7.586412275s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-472903 status --alsologtostderr -v 5: exit status 7 (689.77437ms)

                                                
                                                
-- stdout --
	ha-472903
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-472903-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-472903-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-472903-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 00:12:29.970534  835565 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:12:29.970797  835565 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:12:29.970805  835565 out.go:374] Setting ErrFile to fd 2...
	I0917 00:12:29.970810  835565 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:12:29.970988  835565 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-749120/.minikube/bin
	I0917 00:12:29.971144  835565 out.go:368] Setting JSON to false
	I0917 00:12:29.971166  835565 mustload.go:65] Loading cluster: ha-472903
	I0917 00:12:29.971276  835565 notify.go:220] Checking for updates...
	I0917 00:12:29.971568  835565 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:12:29.971601  835565 status.go:174] checking status of ha-472903 ...
	I0917 00:12:29.972141  835565 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0917 00:12:29.990279  835565 status.go:371] ha-472903 host status = "Running" (err=<nil>)
	I0917 00:12:29.990302  835565 host.go:66] Checking if "ha-472903" exists ...
	I0917 00:12:29.990574  835565 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903
	I0917 00:12:30.008377  835565 host.go:66] Checking if "ha-472903" exists ...
	I0917 00:12:30.008664  835565 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:12:30.008720  835565 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0917 00:12:30.025609  835565 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0917 00:12:30.119686  835565 ssh_runner.go:195] Run: systemctl --version
	I0917 00:12:30.124039  835565 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:12:30.135851  835565 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:12:30.189214  835565 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-17 00:12:30.179042077 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:12:30.189823  835565 kubeconfig.go:125] found "ha-472903" server: "https://192.168.49.254:8443"
	I0917 00:12:30.189855  835565 api_server.go:166] Checking apiserver status ...
	I0917 00:12:30.189893  835565 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:12:30.201743  835565 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1514/cgroup
	W0917 00:12:30.211593  835565 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1514/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:12:30.211643  835565 ssh_runner.go:195] Run: ls
	I0917 00:12:30.214931  835565 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:12:30.219458  835565 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:12:30.219484  835565 status.go:463] ha-472903 apiserver status = Running (err=<nil>)
	I0917 00:12:30.219496  835565 status.go:176] ha-472903 status: &{Name:ha-472903 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:12:30.219518  835565 status.go:174] checking status of ha-472903-m02 ...
	I0917 00:12:30.219746  835565 cli_runner.go:164] Run: docker container inspect ha-472903-m02 --format={{.State.Status}}
	I0917 00:12:30.237383  835565 status.go:371] ha-472903-m02 host status = "Running" (err=<nil>)
	I0917 00:12:30.237438  835565 host.go:66] Checking if "ha-472903-m02" exists ...
	I0917 00:12:30.237725  835565 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m02
	I0917 00:12:30.254733  835565 host.go:66] Checking if "ha-472903-m02" exists ...
	I0917 00:12:30.254996  835565 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:12:30.255033  835565 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0917 00:12:30.273147  835565 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33569 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa Username:docker}
	I0917 00:12:30.367546  835565 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:12:30.379851  835565 kubeconfig.go:125] found "ha-472903" server: "https://192.168.49.254:8443"
	I0917 00:12:30.379877  835565 api_server.go:166] Checking apiserver status ...
	I0917 00:12:30.379920  835565 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:12:30.390742  835565 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/677/cgroup
	W0917 00:12:30.400021  835565 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/677/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:12:30.400062  835565 ssh_runner.go:195] Run: ls
	I0917 00:12:30.403396  835565 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:12:30.407388  835565 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:12:30.407421  835565 status.go:463] ha-472903-m02 apiserver status = Running (err=<nil>)
	I0917 00:12:30.407433  835565 status.go:176] ha-472903-m02 status: &{Name:ha-472903-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:12:30.407458  835565 status.go:174] checking status of ha-472903-m03 ...
	I0917 00:12:30.407774  835565 cli_runner.go:164] Run: docker container inspect ha-472903-m03 --format={{.State.Status}}
	I0917 00:12:30.424965  835565 status.go:371] ha-472903-m03 host status = "Running" (err=<nil>)
	I0917 00:12:30.424983  835565 host.go:66] Checking if "ha-472903-m03" exists ...
	I0917 00:12:30.425201  835565 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m03
	I0917 00:12:30.441828  835565 host.go:66] Checking if "ha-472903-m03" exists ...
	I0917 00:12:30.442081  835565 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:12:30.442123  835565 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0917 00:12:30.459171  835565 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33554 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa Username:docker}
	I0917 00:12:30.551408  835565 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:12:30.563617  835565 kubeconfig.go:125] found "ha-472903" server: "https://192.168.49.254:8443"
	I0917 00:12:30.563645  835565 api_server.go:166] Checking apiserver status ...
	I0917 00:12:30.563677  835565 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:12:30.574786  835565 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1396/cgroup
	W0917 00:12:30.584519  835565 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1396/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:12:30.584573  835565 ssh_runner.go:195] Run: ls
	I0917 00:12:30.587971  835565 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:12:30.591981  835565 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:12:30.592002  835565 status.go:463] ha-472903-m03 apiserver status = Running (err=<nil>)
	I0917 00:12:30.592011  835565 status.go:176] ha-472903-m03 status: &{Name:ha-472903-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:12:30.592027  835565 status.go:174] checking status of ha-472903-m04 ...
	I0917 00:12:30.592245  835565 cli_runner.go:164] Run: docker container inspect ha-472903-m04 --format={{.State.Status}}
	I0917 00:12:30.611001  835565 status.go:371] ha-472903-m04 host status = "Stopped" (err=<nil>)
	I0917 00:12:30.611019  835565 status.go:384] host is not running, skipping remaining checks
	I0917 00:12:30.611025  835565 status.go:176] ha-472903-m04 status: &{Name:ha-472903-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0917 00:12:30.617154  752707 retry.go:31] will retry after 23.033447715s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-472903 status --alsologtostderr -v 5: exit status 7 (700.674965ms)

                                                
                                                
-- stdout --
	ha-472903
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-472903-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-472903-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-472903-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 00:12:53.697017  835976 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:12:53.697150  835976 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:12:53.697160  835976 out.go:374] Setting ErrFile to fd 2...
	I0917 00:12:53.697165  835976 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:12:53.697395  835976 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-749120/.minikube/bin
	I0917 00:12:53.697620  835976 out.go:368] Setting JSON to false
	I0917 00:12:53.697643  835976 mustload.go:65] Loading cluster: ha-472903
	I0917 00:12:53.697754  835976 notify.go:220] Checking for updates...
	I0917 00:12:53.698072  835976 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:12:53.698095  835976 status.go:174] checking status of ha-472903 ...
	I0917 00:12:53.698585  835976 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0917 00:12:53.718240  835976 status.go:371] ha-472903 host status = "Running" (err=<nil>)
	I0917 00:12:53.718275  835976 host.go:66] Checking if "ha-472903" exists ...
	I0917 00:12:53.718580  835976 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903
	I0917 00:12:53.735692  835976 host.go:66] Checking if "ha-472903" exists ...
	I0917 00:12:53.735988  835976 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:12:53.736039  835976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0917 00:12:53.754855  835976 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0917 00:12:53.847849  835976 ssh_runner.go:195] Run: systemctl --version
	I0917 00:12:53.852170  835976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:12:53.864138  835976 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:12:53.919206  835976 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-17 00:12:53.910304694 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:12:53.919928  835976 kubeconfig.go:125] found "ha-472903" server: "https://192.168.49.254:8443"
	I0917 00:12:53.919966  835976 api_server.go:166] Checking apiserver status ...
	I0917 00:12:53.920018  835976 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:12:53.932079  835976 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1514/cgroup
	W0917 00:12:53.941966  835976 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1514/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:12:53.942027  835976 ssh_runner.go:195] Run: ls
	I0917 00:12:53.945601  835976 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:12:53.951299  835976 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:12:53.951320  835976 status.go:463] ha-472903 apiserver status = Running (err=<nil>)
	I0917 00:12:53.951333  835976 status.go:176] ha-472903 status: &{Name:ha-472903 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:12:53.951357  835976 status.go:174] checking status of ha-472903-m02 ...
	I0917 00:12:53.951689  835976 cli_runner.go:164] Run: docker container inspect ha-472903-m02 --format={{.State.Status}}
	I0917 00:12:53.969217  835976 status.go:371] ha-472903-m02 host status = "Running" (err=<nil>)
	I0917 00:12:53.969237  835976 host.go:66] Checking if "ha-472903-m02" exists ...
	I0917 00:12:53.969496  835976 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m02
	I0917 00:12:53.986220  835976 host.go:66] Checking if "ha-472903-m02" exists ...
	I0917 00:12:53.986553  835976 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:12:53.986610  835976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0917 00:12:54.004294  835976 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33569 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa Username:docker}
	I0917 00:12:54.099509  835976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:12:54.112313  835976 kubeconfig.go:125] found "ha-472903" server: "https://192.168.49.254:8443"
	I0917 00:12:54.112345  835976 api_server.go:166] Checking apiserver status ...
	I0917 00:12:54.112385  835976 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:12:54.123286  835976 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/677/cgroup
	W0917 00:12:54.133158  835976 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/677/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:12:54.133213  835976 ssh_runner.go:195] Run: ls
	I0917 00:12:54.136878  835976 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:12:54.142030  835976 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:12:54.142052  835976 status.go:463] ha-472903-m02 apiserver status = Running (err=<nil>)
	I0917 00:12:54.142061  835976 status.go:176] ha-472903-m02 status: &{Name:ha-472903-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:12:54.142084  835976 status.go:174] checking status of ha-472903-m03 ...
	I0917 00:12:54.142334  835976 cli_runner.go:164] Run: docker container inspect ha-472903-m03 --format={{.State.Status}}
	I0917 00:12:54.160189  835976 status.go:371] ha-472903-m03 host status = "Running" (err=<nil>)
	I0917 00:12:54.160211  835976 host.go:66] Checking if "ha-472903-m03" exists ...
	I0917 00:12:54.160512  835976 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m03
	I0917 00:12:54.177319  835976 host.go:66] Checking if "ha-472903-m03" exists ...
	I0917 00:12:54.177590  835976 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:12:54.177639  835976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0917 00:12:54.195781  835976 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33554 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa Username:docker}
	I0917 00:12:54.288648  835976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:12:54.300464  835976 kubeconfig.go:125] found "ha-472903" server: "https://192.168.49.254:8443"
	I0917 00:12:54.300495  835976 api_server.go:166] Checking apiserver status ...
	I0917 00:12:54.300543  835976 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:12:54.311552  835976 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1396/cgroup
	W0917 00:12:54.322234  835976 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1396/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:12:54.322288  835976 ssh_runner.go:195] Run: ls
	I0917 00:12:54.325823  835976 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:12:54.329993  835976 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:12:54.330020  835976 status.go:463] ha-472903-m03 apiserver status = Running (err=<nil>)
	I0917 00:12:54.330028  835976 status.go:176] ha-472903-m03 status: &{Name:ha-472903-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:12:54.330043  835976 status.go:174] checking status of ha-472903-m04 ...
	I0917 00:12:54.330273  835976 cli_runner.go:164] Run: docker container inspect ha-472903-m04 --format={{.State.Status}}
	I0917 00:12:54.347864  835976 status.go:371] ha-472903-m04 host status = "Stopped" (err=<nil>)
	I0917 00:12:54.347883  835976 status.go:384] host is not running, skipping remaining checks
	I0917 00:12:54.347891  835976 status.go:176] ha-472903-m04 status: &{Name:ha-472903-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:434: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-472903 status --alsologtostderr -v 5" : exit status 7
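The assertion at ha_test.go:434 only observes the command's exit code. A minimal Go sketch of reproducing that observation outside the harness (assuming the same out/minikube-linux-amd64 binary and ha-472903 profile as this run):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// The exact command line the harness retried above.
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-472903",
		"status", "--alsologtostderr", "-v", "5")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// In this run the code is 7 whenever ha-472903-m04 is reported Stopped.
		fmt.Println("status exit code:", exitErr.ExitCode())
	}
}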
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-472903
helpers_test.go:243: (dbg) docker inspect ha-472903:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "05f03528ecc5ba6a39041bcc2845d236679d61fa3752c15e7e068dac7d8c9047",
	        "Created": "2025-09-16T23:56:35.178831158Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 804802,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-16T23:56:35.209552026Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/05f03528ecc5ba6a39041bcc2845d236679d61fa3752c15e7e068dac7d8c9047/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/05f03528ecc5ba6a39041bcc2845d236679d61fa3752c15e7e068dac7d8c9047/hostname",
	        "HostsPath": "/var/lib/docker/containers/05f03528ecc5ba6a39041bcc2845d236679d61fa3752c15e7e068dac7d8c9047/hosts",
	        "LogPath": "/var/lib/docker/containers/05f03528ecc5ba6a39041bcc2845d236679d61fa3752c15e7e068dac7d8c9047/05f03528ecc5ba6a39041bcc2845d236679d61fa3752c15e7e068dac7d8c9047-json.log",
	        "Name": "/ha-472903",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-472903:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-472903",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "05f03528ecc5ba6a39041bcc2845d236679d61fa3752c15e7e068dac7d8c9047",
	                "LowerDir": "/var/lib/docker/overlay2/37229b42d46c992f89d690b880f5a9c43e154eecc2ad5aeee133e9eb30accccb-init/diff:/var/lib/docker/overlay2/949a3fbecd0c2c005aa419b7ddc214e7bf4333225d7b227e8b0d0ea188b945ec/diff",
	                "MergedDir": "/var/lib/docker/overlay2/37229b42d46c992f89d690b880f5a9c43e154eecc2ad5aeee133e9eb30accccb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/37229b42d46c992f89d690b880f5a9c43e154eecc2ad5aeee133e9eb30accccb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/37229b42d46c992f89d690b880f5a9c43e154eecc2ad5aeee133e9eb30accccb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-472903",
	                "Source": "/var/lib/docker/volumes/ha-472903/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-472903",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-472903",
	                "name.minikube.sigs.k8s.io": "ha-472903",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "abe382ce28757e80b5cdae91a64217d3672b21c23f3517480bd53105aeca147e",
	            "SandboxKey": "/var/run/docker/netns/abe382ce2875",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33544"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33545"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33548"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33546"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33547"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-472903": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7e:42:9f:f6:50:c2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "22d49b2f397dfabc2a3967bd54b05204a52976e683f65ff07bff00e793040bef",
	                    "EndpointID": "4d4d83129a167c8183e8ef58cc6057f613d8d69adf59710ba6c623d1ff2970c6",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-472903",
	                        "05f03528ecc5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
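The NetworkSettings.Ports section in the inspect output above is what the repeated "docker container inspect -f" calls in the stderr log read to locate the forwarded SSH port. A minimal standalone sketch of that lookup (assuming only a local docker CLI and the ha-472903 container shown above; it reuses the exact Go template from the log):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same Go template the cli_runner calls in the log use to find the
		// host port bound to the container's 22/tcp.
		format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", format, "ha-472903").Output()
		if err != nil {
			fmt.Println("docker inspect failed:", err)
			return
		}
		// Against the inspect output above this prints 33544.
		fmt.Println(strings.TrimSpace(string(out)))
	}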
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-472903 -n ha-472903
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p ha-472903 logs -n 25: (1.117950943s)
helpers_test.go:260: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-472903 ssh -n ha-472903-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │ 17 Sep 25 00:11 UTC │
	│ cp      │ ha-472903 cp ha-472903-m03:/home/docker/cp-test.txt ha-472903:/home/docker/cp-test_ha-472903-m03_ha-472903.txt                       │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │ 17 Sep 25 00:11 UTC │
	│ ssh     │ ha-472903 ssh -n ha-472903-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │ 17 Sep 25 00:11 UTC │
	│ ssh     │ ha-472903 ssh -n ha-472903 sudo cat /home/docker/cp-test_ha-472903-m03_ha-472903.txt                                                 │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │ 17 Sep 25 00:11 UTC │
	│ cp      │ ha-472903 cp ha-472903-m03:/home/docker/cp-test.txt ha-472903-m02:/home/docker/cp-test_ha-472903-m03_ha-472903-m02.txt               │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │ 17 Sep 25 00:11 UTC │
	│ ssh     │ ha-472903 ssh -n ha-472903-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │ 17 Sep 25 00:11 UTC │
	│ ssh     │ ha-472903 ssh -n ha-472903-m02 sudo cat /home/docker/cp-test_ha-472903-m03_ha-472903-m02.txt                                         │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │ 17 Sep 25 00:11 UTC │
	│ cp      │ ha-472903 cp ha-472903-m03:/home/docker/cp-test.txt ha-472903-m04:/home/docker/cp-test_ha-472903-m03_ha-472903-m04.txt               │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ ssh     │ ha-472903 ssh -n ha-472903-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │ 17 Sep 25 00:11 UTC │
	│ ssh     │ ha-472903 ssh -n ha-472903-m04 sudo cat /home/docker/cp-test_ha-472903-m03_ha-472903-m04.txt                                         │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ cp      │ ha-472903 cp testdata/cp-test.txt ha-472903-m04:/home/docker/cp-test.txt                                                             │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ ssh     │ ha-472903 ssh -n ha-472903-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ cp      │ ha-472903 cp ha-472903-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2970888916/001/cp-test_ha-472903-m04.txt │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ ssh     │ ha-472903 ssh -n ha-472903-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ cp      │ ha-472903 cp ha-472903-m04:/home/docker/cp-test.txt ha-472903:/home/docker/cp-test_ha-472903-m04_ha-472903.txt                       │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ ssh     │ ha-472903 ssh -n ha-472903-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ ssh     │ ha-472903 ssh -n ha-472903 sudo cat /home/docker/cp-test_ha-472903-m04_ha-472903.txt                                                 │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ cp      │ ha-472903 cp ha-472903-m04:/home/docker/cp-test.txt ha-472903-m02:/home/docker/cp-test_ha-472903-m04_ha-472903-m02.txt               │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ ssh     │ ha-472903 ssh -n ha-472903-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ ssh     │ ha-472903 ssh -n ha-472903-m02 sudo cat /home/docker/cp-test_ha-472903-m04_ha-472903-m02.txt                                         │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ cp      │ ha-472903 cp ha-472903-m04:/home/docker/cp-test.txt ha-472903-m03:/home/docker/cp-test_ha-472903-m04_ha-472903-m03.txt               │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ ssh     │ ha-472903 ssh -n ha-472903-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ ssh     │ ha-472903 ssh -n ha-472903-m03 sudo cat /home/docker/cp-test_ha-472903-m04_ha-472903-m03.txt                                         │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ node    │ ha-472903 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │ 17 Sep 25 00:11 UTC │
	│ node    │ ha-472903 node start m02 --alsologtostderr -v 5                                                                                      │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │ 17 Sep 25 00:11 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/16 23:56:30
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 23:56:30.301112  804231 out.go:360] Setting OutFile to fd 1 ...
	I0916 23:56:30.301322  804231 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0916 23:56:30.301330  804231 out.go:374] Setting ErrFile to fd 2...
	I0916 23:56:30.301335  804231 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0916 23:56:30.301535  804231 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-749120/.minikube/bin
	I0916 23:56:30.302024  804231 out.go:368] Setting JSON to false
	I0916 23:56:30.302925  804231 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":9532,"bootTime":1758057458,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 23:56:30.303027  804231 start.go:140] virtualization: kvm guest
	I0916 23:56:30.304965  804231 out.go:179] * [ha-472903] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0916 23:56:30.306181  804231 out.go:179]   - MINIKUBE_LOCATION=21550
	I0916 23:56:30.306189  804231 notify.go:220] Checking for updates...
	I0916 23:56:30.308309  804231 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 23:56:30.309530  804231 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-749120/kubeconfig
	I0916 23:56:30.310577  804231 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-749120/.minikube
	I0916 23:56:30.311523  804231 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 23:56:30.312490  804231 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 23:56:30.313634  804231 driver.go:421] Setting default libvirt URI to qemu:///system
	I0916 23:56:30.336203  804231 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0916 23:56:30.336330  804231 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 23:56:30.390690  804231 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-16 23:56:30.380521507 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 23:56:30.390801  804231 docker.go:318] overlay module found
	I0916 23:56:30.392435  804231 out.go:179] * Using the docker driver based on user configuration
	I0916 23:56:30.393493  804231 start.go:304] selected driver: docker
	I0916 23:56:30.393505  804231 start.go:918] validating driver "docker" against <nil>
	I0916 23:56:30.393517  804231 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 23:56:30.394092  804231 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 23:56:30.448140  804231 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-16 23:56:30.438500908 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 23:56:30.448302  804231 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0916 23:56:30.448529  804231 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 23:56:30.450143  804231 out.go:179] * Using Docker driver with root privileges
	I0916 23:56:30.451156  804231 cni.go:84] Creating CNI manager for ""
	I0916 23:56:30.451216  804231 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0916 23:56:30.451226  804231 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 23:56:30.451301  804231 start.go:348] cluster config:
	{Name:ha-472903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPl
ugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m
0s}
	I0916 23:56:30.452491  804231 out.go:179] * Starting "ha-472903" primary control-plane node in "ha-472903" cluster
	I0916 23:56:30.453469  804231 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0916 23:56:30.454617  804231 out.go:179] * Pulling base image v0.0.48 ...
	I0916 23:56:30.455626  804231 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0916 23:56:30.455658  804231 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4
	I0916 23:56:30.455669  804231 cache.go:58] Caching tarball of preloaded images
	I0916 23:56:30.455737  804231 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0916 23:56:30.455747  804231 preload.go:172] Found /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 23:56:30.455875  804231 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0916 23:56:30.456208  804231 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0916 23:56:30.456245  804231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json: {Name:mkb16495f6ef626fa58a9600f3b4a943b5aaf14d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:56:30.475568  804231 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0916 23:56:30.475587  804231 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0916 23:56:30.475611  804231 cache.go:232] Successfully downloaded all kic artifacts
	I0916 23:56:30.475644  804231 start.go:360] acquireMachinesLock for ha-472903: {Name:mk994658ce3314f2aed1dec341debc49d36a4326 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 23:56:30.475759  804231 start.go:364] duration metric: took 97.738µs to acquireMachinesLock for "ha-472903"
	I0916 23:56:30.475786  804231 start.go:93] Provisioning new machine with config: &{Name:ha-472903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APISer
verIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath
: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 23:56:30.475881  804231 start.go:125] createHost starting for "" (driver="docker")
	I0916 23:56:30.477680  804231 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0916 23:56:30.477953  804231 start.go:159] libmachine.API.Create for "ha-472903" (driver="docker")
	I0916 23:56:30.477986  804231 client.go:168] LocalClient.Create starting
	I0916 23:56:30.478060  804231 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem
	I0916 23:56:30.478097  804231 main.go:141] libmachine: Decoding PEM data...
	I0916 23:56:30.478118  804231 main.go:141] libmachine: Parsing certificate...
	I0916 23:56:30.478203  804231 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem
	I0916 23:56:30.478234  804231 main.go:141] libmachine: Decoding PEM data...
	I0916 23:56:30.478247  804231 main.go:141] libmachine: Parsing certificate...
	I0916 23:56:30.478706  804231 cli_runner.go:164] Run: docker network inspect ha-472903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0916 23:56:30.494743  804231 cli_runner.go:211] docker network inspect ha-472903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0916 23:56:30.494806  804231 network_create.go:284] running [docker network inspect ha-472903] to gather additional debugging logs...
	I0916 23:56:30.494829  804231 cli_runner.go:164] Run: docker network inspect ha-472903
	W0916 23:56:30.510851  804231 cli_runner.go:211] docker network inspect ha-472903 returned with exit code 1
	I0916 23:56:30.510886  804231 network_create.go:287] error running [docker network inspect ha-472903]: docker network inspect ha-472903: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-472903 not found
	I0916 23:56:30.510919  804231 network_create.go:289] output of [docker network inspect ha-472903]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-472903 not found
	
	** /stderr **
	I0916 23:56:30.511007  804231 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:56:30.527272  804231 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001b12870}
	I0916 23:56:30.527312  804231 network_create.go:124] attempt to create docker network ha-472903 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0916 23:56:30.527357  804231 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-472903 ha-472903
	I0916 23:56:30.581246  804231 network_create.go:108] docker network ha-472903 192.168.49.0/24 created
	I0916 23:56:30.581278  804231 kic.go:121] calculated static IP "192.168.49.2" for the "ha-472903" container
	I0916 23:56:30.581331  804231 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 23:56:30.597113  804231 cli_runner.go:164] Run: docker volume create ha-472903 --label name.minikube.sigs.k8s.io=ha-472903 --label created_by.minikube.sigs.k8s.io=true
	I0916 23:56:30.614615  804231 oci.go:103] Successfully created a docker volume ha-472903
	I0916 23:56:30.614694  804231 cli_runner.go:164] Run: docker run --rm --name ha-472903-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-472903 --entrypoint /usr/bin/test -v ha-472903:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0916 23:56:30.983301  804231 oci.go:107] Successfully prepared a docker volume ha-472903
	I0916 23:56:30.983346  804231 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0916 23:56:30.983369  804231 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 23:56:30.983457  804231 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-472903:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 23:56:35.109877  804231 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-472903:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.126378793s)
	I0916 23:56:35.109930  804231 kic.go:203] duration metric: took 4.126557088s to extract preloaded images to volume ...
	W0916 23:56:35.110010  804231 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0916 23:56:35.110041  804231 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0916 23:56:35.110081  804231 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 23:56:35.162423  804231 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-472903 --name ha-472903 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-472903 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-472903 --network ha-472903 --ip 192.168.49.2 --volume ha-472903:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0916 23:56:35.411448  804231 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Running}}
	I0916 23:56:35.428877  804231 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0916 23:56:35.447492  804231 cli_runner.go:164] Run: docker exec ha-472903 stat /var/lib/dpkg/alternatives/iptables
	I0916 23:56:35.490145  804231 oci.go:144] the created container "ha-472903" has a running status.
	I0916 23:56:35.490177  804231 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa...
	I0916 23:56:35.748917  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0916 23:56:35.748974  804231 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 23:56:35.776040  804231 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0916 23:56:35.795374  804231 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 23:56:35.795403  804231 kic_runner.go:114] Args: [docker exec --privileged ha-472903 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 23:56:35.841194  804231 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0916 23:56:35.859165  804231 machine.go:93] provisionDockerMachine start ...
	I0916 23:56:35.859278  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:56:35.877348  804231 main.go:141] libmachine: Using SSH client type: native
	I0916 23:56:35.877637  804231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33544 <nil> <nil>}
	I0916 23:56:35.877654  804231 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 23:56:36.014327  804231 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-472903
	
	I0916 23:56:36.014356  804231 ubuntu.go:182] provisioning hostname "ha-472903"
	I0916 23:56:36.014430  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:56:36.033295  804231 main.go:141] libmachine: Using SSH client type: native
	I0916 23:56:36.033543  804231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33544 <nil> <nil>}
	I0916 23:56:36.033558  804231 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-472903 && echo "ha-472903" | sudo tee /etc/hostname
	I0916 23:56:36.178557  804231 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-472903
	
	I0916 23:56:36.178627  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:56:36.196584  804231 main.go:141] libmachine: Using SSH client type: native
	I0916 23:56:36.196791  804231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33544 <nil> <nil>}
	I0916 23:56:36.196814  804231 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-472903' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-472903/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-472903' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 23:56:36.331895  804231 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 23:56:36.331954  804231 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-749120/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-749120/.minikube}
	I0916 23:56:36.331987  804231 ubuntu.go:190] setting up certificates
	I0916 23:56:36.332000  804231 provision.go:84] configureAuth start
	I0916 23:56:36.332062  804231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903
	I0916 23:56:36.350923  804231 provision.go:143] copyHostCerts
	I0916 23:56:36.350968  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem
	I0916 23:56:36.351011  804231 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem, removing ...
	I0916 23:56:36.351021  804231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem
	I0916 23:56:36.351100  804231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem (1078 bytes)
	I0916 23:56:36.351216  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem
	I0916 23:56:36.351254  804231 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem, removing ...
	I0916 23:56:36.351265  804231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem
	I0916 23:56:36.351307  804231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem (1123 bytes)
	I0916 23:56:36.351374  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem
	I0916 23:56:36.351400  804231 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem, removing ...
	I0916 23:56:36.351409  804231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem
	I0916 23:56:36.351461  804231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem (1675 bytes)
	I0916 23:56:36.351538  804231 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem org=jenkins.ha-472903 san=[127.0.0.1 192.168.49.2 ha-472903 localhost minikube]
	I0916 23:56:36.406870  804231 provision.go:177] copyRemoteCerts
	I0916 23:56:36.406927  804231 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 23:56:36.406977  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:56:36.424064  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0916 23:56:36.520663  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 23:56:36.520737  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 23:56:36.546100  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 23:56:36.546162  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0916 23:56:36.569886  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 23:56:36.569946  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 23:56:36.593694  804231 provision.go:87] duration metric: took 261.676108ms to configureAuth
	I0916 23:56:36.593725  804231 ubuntu.go:206] setting minikube options for container-runtime
	I0916 23:56:36.593891  804231 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0916 23:56:36.593903  804231 machine.go:96] duration metric: took 734.71199ms to provisionDockerMachine
	I0916 23:56:36.593911  804231 client.go:171] duration metric: took 6.115914604s to LocalClient.Create
	I0916 23:56:36.593933  804231 start.go:167] duration metric: took 6.115991162s to libmachine.API.Create "ha-472903"
	I0916 23:56:36.593942  804231 start.go:293] postStartSetup for "ha-472903" (driver="docker")
	I0916 23:56:36.593950  804231 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 23:56:36.593994  804231 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 23:56:36.594038  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:56:36.611127  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0916 23:56:36.708294  804231 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 23:56:36.711629  804231 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 23:56:36.711662  804231 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 23:56:36.711669  804231 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 23:56:36.711677  804231 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0916 23:56:36.711690  804231 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-749120/.minikube/addons for local assets ...
	I0916 23:56:36.711734  804231 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-749120/.minikube/files for local assets ...
	I0916 23:56:36.711817  804231 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> 7527072.pem in /etc/ssl/certs
	I0916 23:56:36.711829  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> /etc/ssl/certs/7527072.pem
	I0916 23:56:36.711917  804231 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 23:56:36.720521  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem --> /etc/ssl/certs/7527072.pem (1708 bytes)
	I0916 23:56:36.746614  804231 start.go:296] duration metric: took 152.657806ms for postStartSetup
	I0916 23:56:36.746970  804231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903
	I0916 23:56:36.763912  804231 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0916 23:56:36.764159  804231 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 23:56:36.764204  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:56:36.781099  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0916 23:56:36.872372  804231 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 23:56:36.876670  804231 start.go:128] duration metric: took 6.400768235s to createHost
	I0916 23:56:36.876701  804231 start.go:83] releasing machines lock for "ha-472903", held for 6.400928988s
	I0916 23:56:36.876787  804231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903
	I0916 23:56:36.894080  804231 ssh_runner.go:195] Run: cat /version.json
	I0916 23:56:36.894094  804231 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 23:56:36.894141  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:56:36.894182  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:56:36.912628  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0916 23:56:36.913001  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0916 23:56:37.079386  804231 ssh_runner.go:195] Run: systemctl --version
	I0916 23:56:37.084104  804231 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 23:56:37.088563  804231 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 23:56:37.116786  804231 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 23:56:37.116846  804231 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 23:56:37.142716  804231 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0916 23:56:37.142738  804231 start.go:495] detecting cgroup driver to use...
	I0916 23:56:37.142772  804231 detect.go:190] detected "systemd" cgroup driver on host os
	I0916 23:56:37.142832  804231 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 23:56:37.154693  804231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 23:56:37.165920  804231 docker.go:218] disabling cri-docker service (if available) ...
	I0916 23:56:37.165978  804231 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 23:56:37.179227  804231 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 23:56:37.192751  804231 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 23:56:37.255915  804231 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 23:56:37.324761  804231 docker.go:234] disabling docker service ...
	I0916 23:56:37.324836  804231 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 23:56:37.342233  804231 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 23:56:37.353324  804231 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 23:56:37.420555  804231 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 23:56:37.486396  804231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 23:56:37.497453  804231 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 23:56:37.513435  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0916 23:56:37.524399  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 23:56:37.534072  804231 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0916 23:56:37.534132  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0916 23:56:37.543872  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 23:56:37.553478  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 23:56:37.562918  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 23:56:37.572431  804231 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 23:56:37.581176  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 23:56:37.590540  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 23:56:37.599825  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 23:56:37.609340  804231 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 23:56:37.617500  804231 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 23:56:37.625771  804231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:56:37.685687  804231 ssh_runner.go:195] Run: sudo systemctl restart containerd
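The block above shows minikube rewriting /etc/containerd/config.toml over SSH with a series of sed one-liners (pinning the pause image, forcing SystemdCgroup = true, switching to the runc v2 shim, fixing conf_dir) and then restarting containerd. As a rough illustration only, not minikube's actual code, the SystemdCgroup edit could be expressed in Go with a multiline regexp; running it locally against the path named in the log is a simplification of the SSH-based flow:

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Same edit as: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g'
	// applied to the containerd config file named in the log above.
	path := "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = true"))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("SystemdCgroup forced to true")
}
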
	I0916 23:56:37.787201  804231 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0916 23:56:37.787275  804231 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0916 23:56:37.791126  804231 start.go:563] Will wait 60s for crictl version
	I0916 23:56:37.791200  804231 ssh_runner.go:195] Run: which crictl
	I0916 23:56:37.794684  804231 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 23:56:37.828753  804231 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0916 23:56:37.828806  804231 ssh_runner.go:195] Run: containerd --version
	I0916 23:56:37.851610  804231 ssh_runner.go:195] Run: containerd --version
	I0916 23:56:37.876577  804231 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0916 23:56:37.877711  804231 cli_runner.go:164] Run: docker network inspect ha-472903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:56:37.894044  804231 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 23:56:37.897995  804231 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 23:56:37.909702  804231 kubeadm.go:875] updating cluster {Name:ha-472903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIP
s:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetCli
entPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 23:56:37.909830  804231 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0916 23:56:37.909936  804231 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 23:56:37.943964  804231 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 23:56:37.943985  804231 containerd.go:534] Images already preloaded, skipping extraction
	I0916 23:56:37.944040  804231 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 23:56:37.976374  804231 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 23:56:37.976397  804231 cache_images.go:85] Images are preloaded, skipping loading
	I0916 23:56:37.976405  804231 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 containerd true true} ...
	I0916 23:56:37.976525  804231 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-472903 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 23:56:37.976590  804231 ssh_runner.go:195] Run: sudo crictl info
	I0916 23:56:38.009585  804231 cni.go:84] Creating CNI manager for ""
	I0916 23:56:38.009608  804231 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0916 23:56:38.009620  804231 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 23:56:38.009642  804231 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-472903 NodeName:ha-472903 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 23:56:38.009740  804231 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "ha-472903"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
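The generated kubeadm config above pins the pod subnet to 10.244.0.0/16 and the service subnet to 10.96.0.0/12. A tiny stdlib-only Go sketch of the kind of sanity check one could run on those two values (not something the log itself performs):

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// The pod subnet and service subnet from the config above must not overlap.
	podSubnet := netip.MustParsePrefix("10.244.0.0/16")
	serviceSubnet := netip.MustParsePrefix("10.96.0.0/12")
	if podSubnet.Overlaps(serviceSubnet) {
		fmt.Println("invalid config: pod and service CIDRs overlap")
		return
	}
	fmt.Println("pod and service CIDRs are disjoint")
}
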
	
	I0916 23:56:38.009763  804231 kube-vip.go:115] generating kube-vip config ...
	I0916 23:56:38.009799  804231 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0916 23:56:38.022796  804231 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0916 23:56:38.022978  804231 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
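A few lines above, before dumping this manifest, minikube probes for the ip_vs kernel module with sudo sh -c "lsmod | grep ip_vs" and, since the module is absent on this host, gives up on control-plane load balancing and falls back to the plain kube-vip static pod shown here. A minimal, purely illustrative Go sketch of an equivalent probe that reads /proc/modules directly (an assumption; minikube shells out to lsmod instead):

package main

import (
	"fmt"
	"os"
	"strings"
)

// hasIPVS reports whether the ip_vs kernel module appears in /proc/modules,
// roughly what `lsmod | grep ip_vs` checks in the log above.
func hasIPVS() (bool, error) {
	data, err := os.ReadFile("/proc/modules")
	if err != nil {
		return false, err
	}
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasPrefix(line, "ip_vs") {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasIPVS()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("ip_vs available:", ok)
}
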
	I0916 23:56:38.023041  804231 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0916 23:56:38.032162  804231 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 23:56:38.032241  804231 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0916 23:56:38.040936  804231 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0916 23:56:38.058672  804231 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 23:56:38.079097  804231 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I0916 23:56:38.097183  804231 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I0916 23:56:38.116629  804231 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0916 23:56:38.120221  804231 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 23:56:38.131205  804231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:56:38.195735  804231 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 23:56:38.216649  804231 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903 for IP: 192.168.49.2
	I0916 23:56:38.216671  804231 certs.go:194] generating shared ca certs ...
	I0916 23:56:38.216692  804231 certs.go:226] acquiring lock for ca certs: {Name:mk87d179b4a631193bd9c86db8034ccf19400cde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:56:38.216854  804231 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key
	I0916 23:56:38.216907  804231 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key
	I0916 23:56:38.216920  804231 certs.go:256] generating profile certs ...
	I0916 23:56:38.216989  804231 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key
	I0916 23:56:38.217007  804231 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.crt with IP's: []
	I0916 23:56:38.286683  804231 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.crt ...
	I0916 23:56:38.286713  804231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.crt: {Name:mk764ef4ac73429cea14d799835f3822d8afb254 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:56:38.286876  804231 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key ...
	I0916 23:56:38.286887  804231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key: {Name:mk988f40b7ad20c61b4ffc19afd15eea50787a6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:56:38.286965  804231 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.ef70afe8
	I0916 23:56:38.286981  804231 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.ef70afe8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I0916 23:56:38.411782  804231 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.ef70afe8 ...
	I0916 23:56:38.411812  804231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.ef70afe8: {Name:mkbca9fcc4cd73eb913b43ef67240975ba048601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:56:38.411977  804231 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.ef70afe8 ...
	I0916 23:56:38.411990  804231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.ef70afe8: {Name:mk56f7fb29011c6372caaf96dfdbcab1b202e8b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:56:38.412061  804231 certs.go:381] copying /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.ef70afe8 -> /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt
	I0916 23:56:38.412138  804231 certs.go:385] copying /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.ef70afe8 -> /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key
	I0916 23:56:38.412190  804231 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key
	I0916 23:56:38.412204  804231 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt with IP's: []
	I0916 23:56:38.735728  804231 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt ...
	I0916 23:56:38.735759  804231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt: {Name:mke25602938652bbe51197bb8e5738dfc5dca50b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:56:38.735935  804231 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key ...
	I0916 23:56:38.735947  804231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key: {Name:mkc7d616357a8be8181d43ca8cb33ab512ce94dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
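The certs.go/crypto.go lines above record minikube generating the per-profile client, apiserver, and aggregator proxy-client certificates. For orientation only, a self-contained Go sketch of issuing a comparable self-signed client certificate with crypto/x509; the subject name and validity window are made up, and minikube's real implementation signs against its CA and writes the files under the profile directory rather than printing PEM to stdout:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

func main() {
	// Illustrative only: a self-signed client cert loosely analogous to the
	// profile certs the log reports generating.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube-user"}, // hypothetical subject
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
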
	I0916 23:56:38.736027  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 23:56:38.736044  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 23:56:38.736055  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 23:56:38.736068  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 23:56:38.736078  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 23:56:38.736090  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 23:56:38.736105  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 23:56:38.736115  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 23:56:38.736175  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem (1338 bytes)
	W0916 23:56:38.736210  804231 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707_empty.pem, impossibly tiny 0 bytes
	I0916 23:56:38.736218  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem (1675 bytes)
	I0916 23:56:38.736242  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem (1078 bytes)
	I0916 23:56:38.736266  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem (1123 bytes)
	I0916 23:56:38.736284  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem (1675 bytes)
	I0916 23:56:38.736322  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem (1708 bytes)
	I0916 23:56:38.736347  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> /usr/share/ca-certificates/7527072.pem
	I0916 23:56:38.736360  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:56:38.736372  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem -> /usr/share/ca-certificates/752707.pem
	I0916 23:56:38.736905  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 23:56:38.762142  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 23:56:38.786590  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 23:56:38.810694  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0916 23:56:38.834521  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0916 23:56:38.858677  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 23:56:38.881975  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 23:56:38.906146  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 23:56:38.929698  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem --> /usr/share/ca-certificates/7527072.pem (1708 bytes)
	I0916 23:56:38.955154  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 23:56:38.978551  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem --> /usr/share/ca-certificates/752707.pem (1338 bytes)
	I0916 23:56:39.001782  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 23:56:39.019405  804231 ssh_runner.go:195] Run: openssl version
	I0916 23:56:39.024868  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/752707.pem && ln -fs /usr/share/ca-certificates/752707.pem /etc/ssl/certs/752707.pem"
	I0916 23:56:39.034165  804231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/752707.pem
	I0916 23:56:39.038348  804231 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 23:54 /usr/share/ca-certificates/752707.pem
	I0916 23:56:39.038407  804231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/752707.pem
	I0916 23:56:39.045172  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/752707.pem /etc/ssl/certs/51391683.0"
	I0916 23:56:39.054735  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7527072.pem && ln -fs /usr/share/ca-certificates/7527072.pem /etc/ssl/certs/7527072.pem"
	I0916 23:56:39.065180  804231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7527072.pem
	I0916 23:56:39.068976  804231 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 23:54 /usr/share/ca-certificates/7527072.pem
	I0916 23:56:39.069038  804231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7527072.pem
	I0916 23:56:39.075920  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7527072.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 23:56:39.085838  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 23:56:39.095394  804231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:56:39.098966  804231 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:56:39.099019  804231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:56:39.105643  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 23:56:39.114800  804231 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 23:56:39.117988  804231 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 23:56:39.118033  804231 kubeadm.go:392] StartCluster: {Name:ha-472903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[
] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClient
Path: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 23:56:39.118097  804231 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0916 23:56:39.118132  804231 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 23:56:39.154291  804231 cri.go:89] found id: ""
	I0916 23:56:39.154361  804231 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 23:56:39.163485  804231 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 23:56:39.172454  804231 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0916 23:56:39.172499  804231 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 23:56:39.181066  804231 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 23:56:39.181098  804231 kubeadm.go:157] found existing configuration files:
	
	I0916 23:56:39.181131  804231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 23:56:39.189824  804231 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 23:56:39.189873  804231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 23:56:39.198165  804231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 23:56:39.206772  804231 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 23:56:39.206819  804231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 23:56:39.215119  804231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 23:56:39.223660  804231 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 23:56:39.223717  804231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 23:56:39.232099  804231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 23:56:39.240514  804231 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 23:56:39.240559  804231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 23:56:39.248850  804231 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0916 23:56:39.285897  804231 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0916 23:56:39.285950  804231 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 23:56:39.300660  804231 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0916 23:56:39.300727  804231 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1037-gcp
	I0916 23:56:39.300801  804231 kubeadm.go:310] OS: Linux
	I0916 23:56:39.300901  804231 kubeadm.go:310] CGROUPS_CPU: enabled
	I0916 23:56:39.300975  804231 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0916 23:56:39.301037  804231 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0916 23:56:39.301080  804231 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0916 23:56:39.301127  804231 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0916 23:56:39.301169  804231 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0916 23:56:39.301211  804231 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0916 23:56:39.301268  804231 kubeadm.go:310] CGROUPS_IO: enabled
	I0916 23:56:39.351787  804231 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 23:56:39.351909  804231 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 23:56:39.351995  804231 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 23:56:39.358062  804231 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 23:56:39.360794  804231 out.go:252]   - Generating certificates and keys ...
	I0916 23:56:39.360906  804231 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 23:56:39.360984  804231 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 23:56:39.805287  804231 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 23:56:40.002708  804231 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 23:56:40.279763  804231 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 23:56:40.813028  804231 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 23:56:41.074848  804231 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 23:56:41.075343  804231 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-472903 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0916 23:56:41.124880  804231 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 23:56:41.125041  804231 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-472903 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0916 23:56:41.707716  804231 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 23:56:42.089212  804231 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 23:56:42.627038  804231 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 23:56:42.627119  804231 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 23:56:42.823901  804231 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 23:56:43.022989  804231 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 23:56:43.163778  804231 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 23:56:43.708743  804231 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 23:56:44.024642  804231 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 23:56:44.025130  804231 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 23:56:44.027319  804231 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 23:56:44.029599  804231 out.go:252]   - Booting up control plane ...
	I0916 23:56:44.029737  804231 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 23:56:44.029842  804231 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 23:56:44.030181  804231 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 23:56:44.039957  804231 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 23:56:44.040118  804231 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0916 23:56:44.047794  804231 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0916 23:56:44.048177  804231 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 23:56:44.048269  804231 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 23:56:44.122629  804231 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 23:56:44.122739  804231 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 23:56:45.124352  804231 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001822735s
	I0916 23:56:45.127338  804231 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0916 23:56:45.127477  804231 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0916 23:56:45.127582  804231 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0916 23:56:45.127694  804231 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0916 23:56:47.478256  804231 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.350892202s
	I0916 23:56:47.717698  804231 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.590223043s
	I0916 23:56:49.129161  804231 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 4.001748341s
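kubeadm's control-plane-check phase above polls the kubelet healthz endpoint and the kube-scheduler, kube-controller-manager, and kube-apiserver livez/healthz endpoints until each reports healthy. A hedged Go sketch of that style of probe loop; the URL, interval, timeout, and TLS skip-verify choice are assumptions for the example, not what kubeadm actually does:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls an HTTPS livez/healthz endpoint until it answers 200 OK
// or the timeout expires, in the spirit of the control-plane checks above.
func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	if err := waitHealthy("https://127.0.0.1:10259/livez", 4*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("control plane component is healthy")
}
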
	I0916 23:56:49.140036  804231 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 23:56:49.148779  804231 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 23:56:49.158010  804231 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 23:56:49.158279  804231 kubeadm.go:310] [mark-control-plane] Marking the node ha-472903 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 23:56:49.165085  804231 kubeadm.go:310] [bootstrap-token] Using token: 4apri1.yqe8ok7wc4ltba21
	I0916 23:56:49.166180  804231 out.go:252]   - Configuring RBAC rules ...
	I0916 23:56:49.166328  804231 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 23:56:49.169225  804231 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 23:56:49.174527  804231 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 23:56:49.176741  804231 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 23:56:49.178892  804231 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 23:56:49.181107  804231 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 23:56:49.534440  804231 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 23:56:49.948567  804231 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 23:56:50.534581  804231 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 23:56:50.535429  804231 kubeadm.go:310] 
	I0916 23:56:50.535529  804231 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 23:56:50.535542  804231 kubeadm.go:310] 
	I0916 23:56:50.535650  804231 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 23:56:50.535660  804231 kubeadm.go:310] 
	I0916 23:56:50.535696  804231 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 23:56:50.535801  804231 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 23:56:50.535858  804231 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 23:56:50.535872  804231 kubeadm.go:310] 
	I0916 23:56:50.535940  804231 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 23:56:50.535949  804231 kubeadm.go:310] 
	I0916 23:56:50.536027  804231 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 23:56:50.536037  804231 kubeadm.go:310] 
	I0916 23:56:50.536125  804231 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 23:56:50.536212  804231 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 23:56:50.536280  804231 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 23:56:50.536286  804231 kubeadm.go:310] 
	I0916 23:56:50.536356  804231 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 23:56:50.536441  804231 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 23:56:50.536448  804231 kubeadm.go:310] 
	I0916 23:56:50.536543  804231 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 4apri1.yqe8ok7wc4ltba21 \
	I0916 23:56:50.536688  804231 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:52c78ec9ad9a2dc0941e43ce337b864c76ea573e452bc75ed737e69ad76deac1 \
	I0916 23:56:50.536722  804231 kubeadm.go:310] 	--control-plane 
	I0916 23:56:50.536731  804231 kubeadm.go:310] 
	I0916 23:56:50.536842  804231 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 23:56:50.536857  804231 kubeadm.go:310] 
	I0916 23:56:50.536947  804231 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 4apri1.yqe8ok7wc4ltba21 \
	I0916 23:56:50.537084  804231 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:52c78ec9ad9a2dc0941e43ce337b864c76ea573e452bc75ed737e69ad76deac1 
	I0916 23:56:50.539097  804231 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1037-gcp\n", err: exit status 1
	I0916 23:56:50.539238  804231 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 23:56:50.539264  804231 cni.go:84] Creating CNI manager for ""
	I0916 23:56:50.539274  804231 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0916 23:56:50.540523  804231 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0916 23:56:50.541480  804231 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 23:56:50.545518  804231 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0916 23:56:50.545534  804231 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 23:56:50.563251  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0916 23:56:50.762002  804231 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 23:56:50.762092  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:56:50.762127  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-472903 minikube.k8s.io/updated_at=2025_09_16T23_56_50_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a minikube.k8s.io/name=ha-472903 minikube.k8s.io/primary=true
	I0916 23:56:50.771679  804231 ops.go:34] apiserver oom_adj: -16
	I0916 23:56:50.843646  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:56:51.344428  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:56:51.844440  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:56:52.344316  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:56:52.844594  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:56:53.343854  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:56:53.844615  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:56:54.344057  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:56:54.844066  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:56:55.344374  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:56:55.844478  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:56:55.927027  804231 kubeadm.go:1105] duration metric: took 5.165002596s to wait for elevateKubeSystemPrivileges
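The burst of `kubectl get sa default` invocations above is minikube polling roughly every 500ms until the default service account exists (the elevateKubeSystemPrivileges wait that finishes after ~5.2s). A simple Go sketch of the same retry pattern, shelling out to a kubectl assumed to be on PATH rather than the /var/lib/minikube/binaries path used in the log:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls `kubectl get sa default` until it succeeds or the
// timeout expires, mirroring the retry loop visible in the log above.
func waitForDefaultSA(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("kubectl", "get", "sa", "default").Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return errors.New("timed out waiting for default service account")
}

func main() {
	if err := waitForDefaultSA(2 * time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("default service account is ready")
}
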
	I0916 23:56:55.927062  804231 kubeadm.go:394] duration metric: took 16.809033965s to StartCluster
	I0916 23:56:55.927081  804231 settings.go:142] acquiring lock: {Name:mk6c1a5bee23e141aad5180323c16c47ed580ab8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:56:55.927146  804231 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21550-749120/kubeconfig
	I0916 23:56:55.927785  804231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/kubeconfig: {Name:mk937123a8fee18625833b0bd778c4556f6787be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:56:55.928026  804231 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 23:56:55.928018  804231 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 23:56:55.928038  804231 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 23:56:55.928103  804231 start.go:241] waiting for startup goroutines ...
	I0916 23:56:55.928121  804231 addons.go:69] Setting default-storageclass=true in profile "ha-472903"
	I0916 23:56:55.928148  804231 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-472903"
	I0916 23:56:55.928126  804231 addons.go:69] Setting storage-provisioner=true in profile "ha-472903"
	I0916 23:56:55.928222  804231 addons.go:238] Setting addon storage-provisioner=true in "ha-472903"
	I0916 23:56:55.928269  804231 host.go:66] Checking if "ha-472903" exists ...
	I0916 23:56:55.928296  804231 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0916 23:56:55.928610  804231 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0916 23:56:55.928740  804231 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0916 23:56:55.954806  804231 kapi.go:59] client config for ha-472903: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key", CAFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 23:56:55.955519  804231 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0916 23:56:55.955545  804231 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0916 23:56:55.955543  804231 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0916 23:56:55.955553  804231 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0916 23:56:55.955611  804231 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0916 23:56:55.955620  804231 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0916 23:56:55.956096  804231 addons.go:238] Setting addon default-storageclass=true in "ha-472903"
	I0916 23:56:55.956145  804231 host.go:66] Checking if "ha-472903" exists ...
	I0916 23:56:55.956685  804231 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0916 23:56:55.957279  804231 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 23:56:55.961536  804231 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 23:56:55.961557  804231 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 23:56:55.961614  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:56:55.979896  804231 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 23:56:55.979925  804231 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 23:56:55.979985  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:56:55.982838  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0916 23:56:55.999402  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0916 23:56:56.011618  804231 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 23:56:56.095355  804231 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 23:56:56.110814  804231 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 23:56:56.153646  804231 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
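The "host record injected" line is the result of the sed/replace pipeline over the coredns ConfigMap shown a few lines earlier. A quick way to confirm the hosts block landed (a sketch; the kubeconfig context name matches the profile used in this log):

    kubectl --context ha-472903 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A3 'hosts {'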
	I0916 23:56:56.360175  804231 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0916 23:56:56.361116  804231 addons.go:514] duration metric: took 433.076562ms for enable addons: enabled=[storage-provisioner default-storageclass]
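Enabling these addons only applies the manifests; whether they came up healthy can be checked against the cluster afterwards (a sketch using the profile's kubeconfig context):

    kubectl --context ha-472903 get storageclass
    kubectl --context ha-472903 -n kube-system get pod storage-provisioner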
	I0916 23:56:56.361149  804231 start.go:246] waiting for cluster config update ...
	I0916 23:56:56.361163  804231 start.go:255] writing updated cluster config ...
	I0916 23:56:56.362407  804231 out.go:203] 
	I0916 23:56:56.363527  804231 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0916 23:56:56.363621  804231 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0916 23:56:56.364993  804231 out.go:179] * Starting "ha-472903-m02" control-plane node in "ha-472903" cluster
	I0916 23:56:56.365873  804231 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0916 23:56:56.366751  804231 out.go:179] * Pulling base image v0.0.48 ...
	I0916 23:56:56.367539  804231 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0916 23:56:56.367556  804231 cache.go:58] Caching tarball of preloaded images
	I0916 23:56:56.367630  804231 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0916 23:56:56.367646  804231 preload.go:172] Found /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 23:56:56.367654  804231 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0916 23:56:56.367711  804231 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0916 23:56:56.386547  804231 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0916 23:56:56.386565  804231 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0916 23:56:56.386580  804231 cache.go:232] Successfully downloaded all kic artifacts
	I0916 23:56:56.386607  804231 start.go:360] acquireMachinesLock for ha-472903-m02: {Name:mk81d8c73856cf84ceff1767a1681f3f3cdab773 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 23:56:56.386700  804231 start.go:364] duration metric: took 70.184µs to acquireMachinesLock for "ha-472903-m02"
	I0916 23:56:56.386738  804231 start.go:93] Provisioning new machine with config: &{Name:ha-472903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mou
ntOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
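This config block is what drives creation of the second control-plane node inside the test. Outside the test harness, an equivalent node could be added to an existing HA profile roughly like this (a sketch; the --control-plane flag is assumed from current minikube releases):

    out/minikube-linux-amd64 node add --control-plane -p ha-472903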
	I0916 23:56:56.386824  804231 start.go:125] createHost starting for "m02" (driver="docker")
	I0916 23:56:56.388402  804231 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0916 23:56:56.388536  804231 start.go:159] libmachine.API.Create for "ha-472903" (driver="docker")
	I0916 23:56:56.388563  804231 client.go:168] LocalClient.Create starting
	I0916 23:56:56.388626  804231 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem
	I0916 23:56:56.388664  804231 main.go:141] libmachine: Decoding PEM data...
	I0916 23:56:56.388687  804231 main.go:141] libmachine: Parsing certificate...
	I0916 23:56:56.388757  804231 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem
	I0916 23:56:56.388789  804231 main.go:141] libmachine: Decoding PEM data...
	I0916 23:56:56.388804  804231 main.go:141] libmachine: Parsing certificate...
	I0916 23:56:56.389042  804231 cli_runner.go:164] Run: docker network inspect ha-472903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:56:56.404624  804231 network_create.go:77] Found existing network {name:ha-472903 subnet:0xc001d2d140 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0916 23:56:56.404653  804231 kic.go:121] calculated static IP "192.168.49.3" for the "ha-472903-m02" container
	I0916 23:56:56.404719  804231 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 23:56:56.420231  804231 cli_runner.go:164] Run: docker volume create ha-472903-m02 --label name.minikube.sigs.k8s.io=ha-472903-m02 --label created_by.minikube.sigs.k8s.io=true
	I0916 23:56:56.436361  804231 oci.go:103] Successfully created a docker volume ha-472903-m02
	I0916 23:56:56.436430  804231 cli_runner.go:164] Run: docker run --rm --name ha-472903-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-472903-m02 --entrypoint /usr/bin/test -v ha-472903-m02:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0916 23:56:56.943375  804231 oci.go:107] Successfully prepared a docker volume ha-472903-m02
	I0916 23:56:56.943427  804231 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0916 23:56:56.943455  804231 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 23:56:56.943528  804231 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-472903-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 23:57:01.091161  804231 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-472903-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.147592491s)
	I0916 23:57:01.091197  804231 kic.go:203] duration metric: took 4.147738136s to extract preloaded images to volume ...
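The preload tarball is unpacked straight into the node's docker volume so containerd starts with the images already present. The result can be spot-checked by listing the volume with the same kicbase image (a sketch; the /var/lib/containerd layout is assumed from containerd's default state dir):

    docker run --rm --entrypoint /bin/ls -v ha-472903-m02:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 /var/lib/containerd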
	W0916 23:57:01.091312  804231 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0916 23:57:01.091355  804231 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0916 23:57:01.091403  804231 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 23:57:01.142900  804231 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-472903-m02 --name ha-472903-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-472903-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-472903-m02 --network ha-472903 --ip 192.168.49.3 --volume ha-472903-m02:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0916 23:57:01.378924  804231 cli_runner.go:164] Run: docker container inspect ha-472903-m02 --format={{.State.Running}}
	I0916 23:57:01.396232  804231 cli_runner.go:164] Run: docker container inspect ha-472903-m02 --format={{.State.Status}}
	I0916 23:57:01.412927  804231 cli_runner.go:164] Run: docker exec ha-472903-m02 stat /var/lib/dpkg/alternatives/iptables
	I0916 23:57:01.469205  804231 oci.go:144] the created container "ha-472903-m02" has a running status.
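The container publishes SSH and the API port on ephemeral loopback host ports (127.0.0.1:33549 for SSH later in this log). For reference, the mapping and the static IP assigned above can be inspected with standard docker commands:

    docker port ha-472903-m02
    docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' ha-472903-m02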
	I0916 23:57:01.469235  804231 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa...
	I0916 23:57:01.517570  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0916 23:57:01.517621  804231 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 23:57:01.540818  804231 cli_runner.go:164] Run: docker container inspect ha-472903-m02 --format={{.State.Status}}
	I0916 23:57:01.560831  804231 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 23:57:01.560858  804231 kic_runner.go:114] Args: [docker exec --privileged ha-472903-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 23:57:01.615037  804231 cli_runner.go:164] Run: docker container inspect ha-472903-m02 --format={{.State.Status}}
	I0916 23:57:01.637921  804231 machine.go:93] provisionDockerMachine start ...
	I0916 23:57:01.638030  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0916 23:57:01.659741  804231 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:01.660056  804231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33549 <nil> <nil>}
	I0916 23:57:01.660078  804231 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 23:57:01.800716  804231 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-472903-m02
	
	I0916 23:57:01.800749  804231 ubuntu.go:182] provisioning hostname "ha-472903-m02"
	I0916 23:57:01.800817  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0916 23:57:01.819791  804231 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:01.820013  804231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33549 <nil> <nil>}
	I0916 23:57:01.820030  804231 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-472903-m02 && echo "ha-472903-m02" | sudo tee /etc/hostname
	I0916 23:57:01.967539  804231 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-472903-m02
	
	I0916 23:57:01.967631  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0916 23:57:01.987814  804231 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:01.988031  804231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33549 <nil> <nil>}
	I0916 23:57:01.988047  804231 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-472903-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-472903-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-472903-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 23:57:02.121536  804231 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 23:57:02.121571  804231 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-749120/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-749120/.minikube}
	I0916 23:57:02.121588  804231 ubuntu.go:190] setting up certificates
	I0916 23:57:02.121602  804231 provision.go:84] configureAuth start
	I0916 23:57:02.121663  804231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m02
	I0916 23:57:02.139056  804231 provision.go:143] copyHostCerts
	I0916 23:57:02.139098  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem
	I0916 23:57:02.139135  804231 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem, removing ...
	I0916 23:57:02.139147  804231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem
	I0916 23:57:02.139221  804231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem (1078 bytes)
	I0916 23:57:02.139329  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem
	I0916 23:57:02.139362  804231 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem, removing ...
	I0916 23:57:02.139372  804231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem
	I0916 23:57:02.139430  804231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem (1123 bytes)
	I0916 23:57:02.139521  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem
	I0916 23:57:02.139549  804231 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem, removing ...
	I0916 23:57:02.139559  804231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem
	I0916 23:57:02.139599  804231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem (1675 bytes)
	I0916 23:57:02.139690  804231 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem org=jenkins.ha-472903-m02 san=[127.0.0.1 192.168.49.3 ha-472903-m02 localhost minikube]
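The SAN list above (127.0.0.1, 192.168.49.3, the node hostname, localhost, minikube) is what allows the same server.pem to be presented on both the node's cluster IP and the loopback port-forwards. It can be confirmed with openssl (a sketch; the path is the one logged above):

    openssl x509 -in /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem -noout -text | grep -A1 'Subject Alternative Name'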
	I0916 23:57:02.262354  804231 provision.go:177] copyRemoteCerts
	I0916 23:57:02.262428  804231 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 23:57:02.262491  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0916 23:57:02.279792  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33549 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa Username:docker}
	I0916 23:57:02.375833  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 23:57:02.375903  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 23:57:02.400316  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 23:57:02.400373  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 23:57:02.422506  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 23:57:02.422550  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 23:57:02.445091  804231 provision.go:87] duration metric: took 323.464176ms to configureAuth
	I0916 23:57:02.445121  804231 ubuntu.go:206] setting minikube options for container-runtime
	I0916 23:57:02.445295  804231 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0916 23:57:02.445313  804231 machine.go:96] duration metric: took 807.372883ms to provisionDockerMachine
	I0916 23:57:02.445320  804231 client.go:171] duration metric: took 6.056751196s to LocalClient.Create
	I0916 23:57:02.445337  804231 start.go:167] duration metric: took 6.056804276s to libmachine.API.Create "ha-472903"
	I0916 23:57:02.445346  804231 start.go:293] postStartSetup for "ha-472903-m02" (driver="docker")
	I0916 23:57:02.445354  804231 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 23:57:02.445402  804231 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 23:57:02.445461  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0916 23:57:02.463550  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33549 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa Username:docker}
	I0916 23:57:02.559528  804231 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 23:57:02.562755  804231 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 23:57:02.562780  804231 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 23:57:02.562787  804231 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 23:57:02.562793  804231 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0916 23:57:02.562803  804231 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-749120/.minikube/addons for local assets ...
	I0916 23:57:02.562847  804231 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-749120/.minikube/files for local assets ...
	I0916 23:57:02.562920  804231 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> 7527072.pem in /etc/ssl/certs
	I0916 23:57:02.562930  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> /etc/ssl/certs/7527072.pem
	I0916 23:57:02.563018  804231 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 23:57:02.571142  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem --> /etc/ssl/certs/7527072.pem (1708 bytes)
	I0916 23:57:02.596466  804231 start.go:296] duration metric: took 151.106324ms for postStartSetup
	I0916 23:57:02.596768  804231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m02
	I0916 23:57:02.613316  804231 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0916 23:57:02.613561  804231 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 23:57:02.613601  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0916 23:57:02.632056  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33549 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa Username:docker}
	I0916 23:57:02.723085  804231 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 23:57:02.727430  804231 start.go:128] duration metric: took 6.340577447s to createHost
	I0916 23:57:02.727453  804231 start.go:83] releasing machines lock for "ha-472903-m02", held for 6.34073897s
	I0916 23:57:02.727519  804231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m02
	I0916 23:57:02.746152  804231 out.go:179] * Found network options:
	I0916 23:57:02.747248  804231 out.go:179]   - NO_PROXY=192.168.49.2
	W0916 23:57:02.748187  804231 proxy.go:120] fail to check proxy env: Error ip not in block
	W0916 23:57:02.748240  804231 proxy.go:120] fail to check proxy env: Error ip not in block
	I0916 23:57:02.748311  804231 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 23:57:02.748360  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0916 23:57:02.748367  804231 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 23:57:02.748427  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0916 23:57:02.765286  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33549 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa Username:docker}
	I0916 23:57:02.766625  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33549 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa Username:docker}
	I0916 23:57:02.856922  804231 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 23:57:02.936692  804231 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 23:57:02.936761  804231 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 23:57:02.961822  804231 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0916 23:57:02.961845  804231 start.go:495] detecting cgroup driver to use...
	I0916 23:57:02.961878  804231 detect.go:190] detected "systemd" cgroup driver on host os
	I0916 23:57:02.961919  804231 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 23:57:02.973318  804231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 23:57:02.983927  804231 docker.go:218] disabling cri-docker service (if available) ...
	I0916 23:57:02.983969  804231 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 23:57:02.996091  804231 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 23:57:03.009314  804231 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 23:57:03.072565  804231 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 23:57:03.140469  804231 docker.go:234] disabling docker service ...
	I0916 23:57:03.140526  804231 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 23:57:03.157179  804231 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 23:57:03.167955  804231 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 23:57:03.233386  804231 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 23:57:03.296537  804231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 23:57:03.307574  804231 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 23:57:03.323754  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0916 23:57:03.334305  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 23:57:03.343767  804231 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0916 23:57:03.343826  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0916 23:57:03.353029  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 23:57:03.361991  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 23:57:03.371206  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 23:57:03.380598  804231 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 23:57:03.389216  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 23:57:03.398125  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 23:57:03.407145  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 23:57:03.416183  804231 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 23:57:03.424123  804231 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 23:57:03.432185  804231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:03.493561  804231 ssh_runner.go:195] Run: sudo systemctl restart containerd
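The run of sed edits above pins the sandbox image, turns SystemdCgroup on, and points conf_dir at /etc/cni/net.d before containerd is restarted. For reference, the effective values can be read back from the node:

    docker exec ha-472903-m02 grep -E 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml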
	I0916 23:57:03.591942  804231 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0916 23:57:03.592010  804231 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0916 23:57:03.595710  804231 start.go:563] Will wait 60s for crictl version
	I0916 23:57:03.595768  804231 ssh_runner.go:195] Run: which crictl
	I0916 23:57:03.599108  804231 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 23:57:03.633181  804231 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0916 23:57:03.633231  804231 ssh_runner.go:195] Run: containerd --version
	I0916 23:57:03.656364  804231 ssh_runner.go:195] Run: containerd --version
	I0916 23:57:03.680150  804231 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0916 23:57:03.681177  804231 out.go:179]   - env NO_PROXY=192.168.49.2
	I0916 23:57:03.682053  804231 cli_runner.go:164] Run: docker network inspect ha-472903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:57:03.699720  804231 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 23:57:03.703306  804231 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 23:57:03.714275  804231 mustload.go:65] Loading cluster: ha-472903
	I0916 23:57:03.714452  804231 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0916 23:57:03.714650  804231 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0916 23:57:03.730631  804231 host.go:66] Checking if "ha-472903" exists ...
	I0916 23:57:03.730849  804231 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903 for IP: 192.168.49.3
	I0916 23:57:03.730859  804231 certs.go:194] generating shared ca certs ...
	I0916 23:57:03.730877  804231 certs.go:226] acquiring lock for ca certs: {Name:mk87d179b4a631193bd9c86db8034ccf19400cde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:03.730987  804231 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key
	I0916 23:57:03.731023  804231 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key
	I0916 23:57:03.731032  804231 certs.go:256] generating profile certs ...
	I0916 23:57:03.731092  804231 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key
	I0916 23:57:03.731114  804231 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.d67fba4a
	I0916 23:57:03.731125  804231 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.d67fba4a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I0916 23:57:03.830248  804231 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.d67fba4a ...
	I0916 23:57:03.830275  804231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.d67fba4a: {Name:mk3e97859392ca0d50685e4c31c19acd3c590753 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:03.830438  804231 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.d67fba4a ...
	I0916 23:57:03.830453  804231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.d67fba4a: {Name:mkd3ec6288ef831df369d4ec39839c410f5116ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:03.830530  804231 certs.go:381] copying /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.d67fba4a -> /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt
	I0916 23:57:03.830653  804231 certs.go:385] copying /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.d67fba4a -> /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key
	I0916 23:57:03.830779  804231 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key
	I0916 23:57:03.830794  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 23:57:03.830809  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 23:57:03.830823  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 23:57:03.830836  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 23:57:03.830846  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 23:57:03.830855  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 23:57:03.830864  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 23:57:03.830873  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 23:57:03.830920  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem (1338 bytes)
	W0916 23:57:03.830952  804231 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707_empty.pem, impossibly tiny 0 bytes
	I0916 23:57:03.830962  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem (1675 bytes)
	I0916 23:57:03.830981  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem (1078 bytes)
	I0916 23:57:03.831001  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem (1123 bytes)
	I0916 23:57:03.831021  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem (1675 bytes)
	I0916 23:57:03.831058  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem (1708 bytes)
	I0916 23:57:03.831081  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem -> /usr/share/ca-certificates/752707.pem
	I0916 23:57:03.831094  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> /usr/share/ca-certificates/7527072.pem
	I0916 23:57:03.831107  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:03.831156  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:57:03.847964  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0916 23:57:03.934599  804231 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0916 23:57:03.938331  804231 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0916 23:57:03.950286  804231 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0916 23:57:03.953541  804231 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0916 23:57:03.965169  804231 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0916 23:57:03.968351  804231 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0916 23:57:03.979814  804231 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0916 23:57:03.982969  804231 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0916 23:57:03.993972  804231 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0916 23:57:03.997171  804231 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0916 23:57:04.008607  804231 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0916 23:57:04.011687  804231 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0916 23:57:04.023019  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 23:57:04.046509  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 23:57:04.069781  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 23:57:04.092702  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0916 23:57:04.114933  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0916 23:57:04.137173  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1671 bytes)
	I0916 23:57:04.159280  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 23:57:04.181367  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 23:57:04.203980  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem --> /usr/share/ca-certificates/752707.pem (1338 bytes)
	I0916 23:57:04.230248  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem --> /usr/share/ca-certificates/7527072.pem (1708 bytes)
	I0916 23:57:04.253628  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 23:57:04.276223  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0916 23:57:04.293552  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0916 23:57:04.309978  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0916 23:57:04.326237  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0916 23:57:04.342704  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0916 23:57:04.359099  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0916 23:57:04.375242  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
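For a joining control-plane node the cluster CA, front-proxy CA, etcd CA and the service-account keypair are copied from the primary rather than regenerated, since every control-plane member must share identical copies. A quick consistency check across the two node containers (a sketch):

    for f in sa.pub sa.key front-proxy-ca.crt etcd/ca.crt; do
      docker exec ha-472903 sha256sum /var/lib/minikube/certs/$f
      docker exec ha-472903-m02 sha256sum /var/lib/minikube/certs/$f
    done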
	I0916 23:57:04.391611  804231 ssh_runner.go:195] Run: openssl version
	I0916 23:57:04.396637  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/752707.pem && ln -fs /usr/share/ca-certificates/752707.pem /etc/ssl/certs/752707.pem"
	I0916 23:57:04.405389  804231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/752707.pem
	I0916 23:57:04.408604  804231 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 23:54 /usr/share/ca-certificates/752707.pem
	I0916 23:57:04.408651  804231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/752707.pem
	I0916 23:57:04.414862  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/752707.pem /etc/ssl/certs/51391683.0"
	I0916 23:57:04.423583  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7527072.pem && ln -fs /usr/share/ca-certificates/7527072.pem /etc/ssl/certs/7527072.pem"
	I0916 23:57:04.432421  804231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7527072.pem
	I0916 23:57:04.435706  804231 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 23:54 /usr/share/ca-certificates/7527072.pem
	I0916 23:57:04.435752  804231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7527072.pem
	I0916 23:57:04.441863  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7527072.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 23:57:04.450595  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 23:57:04.459588  804231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:04.462866  804231 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:04.462907  804231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:04.469279  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
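This is OpenSSL's hashed CA directory convention: `openssl x509 -hash` prints the subject-name hash, and /etc/ssl/certs/<hash>.0 must be a symlink to the PEM for lookups to succeed. Shown for reference, run against the node container:

    docker exec ha-472903-m02 bash -c 'ls -l /etc/ssl/certs/$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem).0'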
	I0916 23:57:04.478135  804231 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 23:57:04.481236  804231 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 23:57:04.481288  804231 kubeadm.go:926] updating node {m02 192.168.49.3 8443 v1.34.0 containerd true true} ...
	I0916 23:57:04.481383  804231 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-472903-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 23:57:04.481425  804231 kube-vip.go:115] generating kube-vip config ...
	I0916 23:57:04.481462  804231 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0916 23:57:04.492937  804231 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0916 23:57:04.492999  804231 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
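Because the ip_vs modules were not available (the lsmod check above), kube-vip runs without control-plane load-balancing and relies on ARP plus leader election: whichever control-plane node holds the plndr-cp-lock lease answers for 192.168.49.254. Once the cluster is up, the current holder and the VIP can be checked with (a sketch):

    kubectl --context ha-472903 -n kube-system get lease plndr-cp-lock -o jsonpath='{.spec.holderIdentity}'
    ping -c1 192.168.49.254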
	I0916 23:57:04.493041  804231 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0916 23:57:04.501084  804231 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 23:57:04.501123  804231 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0916 23:57:04.509217  804231 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0916 23:57:04.525587  804231 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 23:57:04.544042  804231 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0916 23:57:04.561542  804231 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0916 23:57:04.564725  804231 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 23:57:04.574819  804231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:04.638378  804231 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 23:57:04.659569  804231 host.go:66] Checking if "ha-472903" exists ...
	I0916 23:57:04.659878  804231 start.go:317] joinCluster: &{Name:ha-472903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 23:57:04.659986  804231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0916 23:57:04.660033  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:57:04.678136  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0916 23:57:04.817608  804231 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 23:57:04.817663  804231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 79akng.11lpa8n1ba4yh5m1 --discovery-token-ca-cert-hash sha256:52c78ec9ad9a2dc0941e43ce337b864c76ea573e452bc75ed737e69ad76deac1 --ignore-preflight-errors=all --cri-socket unix:///run/containerd/containerd.sock --node-name=ha-472903-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443"
	I0916 23:57:23.327384  804231 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 79akng.11lpa8n1ba4yh5m1 --discovery-token-ca-cert-hash sha256:52c78ec9ad9a2dc0941e43ce337b864c76ea573e452bc75ed737e69ad76deac1 --ignore-preflight-errors=all --cri-socket unix:///run/containerd/containerd.sock --node-name=ha-472903-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443": (18.509693377s)
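After the join completes (about 18.5s here), m02 should appear as a second control-plane node with its own static etcd pod. For reference:

    kubectl --context ha-472903 get nodes -o wide
    kubectl --context ha-472903 -n kube-system get pods -l component=etcd -o wide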
	I0916 23:57:23.327447  804231 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0916 23:57:23.521334  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-472903-m02 minikube.k8s.io/updated_at=2025_09_16T23_57_23_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a minikube.k8s.io/name=ha-472903 minikube.k8s.io/primary=false
	I0916 23:57:23.592991  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-472903-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0916 23:57:23.664899  804231 start.go:319] duration metric: took 19.005017018s to joinCluster
	I0916 23:57:23.664975  804231 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 23:57:23.665223  804231 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0916 23:57:23.665877  804231 out.go:179] * Verifying Kubernetes components...
	I0916 23:57:23.666680  804231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:23.766393  804231 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 23:57:23.779164  804231 kapi.go:59] client config for ha-472903: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key", CAFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0916 23:57:23.779228  804231 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0916 23:57:23.779511  804231 node_ready.go:35] waiting up to 6m0s for node "ha-472903-m02" to be "Ready" ...
	I0916 23:57:24.283593  804231 node_ready.go:49] node "ha-472903-m02" is "Ready"
	I0916 23:57:24.283628  804231 node_ready.go:38] duration metric: took 504.097895ms for node "ha-472903-m02" to be "Ready" ...
	I0916 23:57:24.283648  804231 api_server.go:52] waiting for apiserver process to appear ...
	I0916 23:57:24.283699  804231 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 23:57:24.295735  804231 api_server.go:72] duration metric: took 630.723924ms to wait for apiserver process to appear ...
	I0916 23:57:24.295758  804231 api_server.go:88] waiting for apiserver healthz status ...
	I0916 23:57:24.295774  804231 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0916 23:57:24.299650  804231 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0916 23:57:24.300537  804231 api_server.go:141] control plane version: v1.34.0
	I0916 23:57:24.300558  804231 api_server.go:131] duration metric: took 4.795429ms to wait for apiserver health ...
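The two checks above (the node "Ready" poll and the GET against /healthz) can be reproduced by hand. A minimal sketch, assuming the ha-472903 kubeconfig context that minikube writes for this profile and the 192.168.49.2:8443 endpoint shown in the log:

  $ kubectl --context ha-472903 get node ha-472903-m02    # expect STATUS Ready
  $ curl -k https://192.168.49.2:8443/healthz              # "ok", as logged by api_server.go:279
  $ curl -k https://192.168.49.2:8443/version              # reports the v1.34.0 control-plane version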
	I0916 23:57:24.300566  804231 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 23:57:24.304572  804231 system_pods.go:59] 19 kube-system pods found
	I0916 23:57:24.304598  804231 system_pods.go:61] "coredns-66bc5c9577-c94hz" [774f1c0f-9759-44c2-957d-5a97670f951b] Running
	I0916 23:57:24.304604  804231 system_pods.go:61] "coredns-66bc5c9577-qn8m7" [1c58205e-e865-42fc-8282-23e3d779ee97] Running
	I0916 23:57:24.304608  804231 system_pods.go:61] "etcd-ha-472903" [e333577b-838c-41c5-ba86-ce3d7de57077] Running
	I0916 23:57:24.304611  804231 system_pods.go:61] "etcd-ha-472903-m02" [8a478117-c53d-4621-aa09-be3c16d386c0] Pending
	I0916 23:57:24.304615  804231 system_pods.go:61] "kindnet-lh7dv" [1da43ca7-9af7-4573-9cdc-fd21b098ca2c] Running
	I0916 23:57:24.304621  804231 system_pods.go:61] "kindnet-mwf8l" [8c9533d3-defe-487b-a9b4-0502fb8f2d2a] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-mwf8l": pod kindnet-mwf8l is being deleted, cannot be assigned to a host)
	I0916 23:57:24.304628  804231 system_pods.go:61] "kindnet-q7c7s" [85db5b30-8ace-4bb0-8886-32b9ca032b2b] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-q7c7s": pod kindnet-q7c7s is already assigned to node "ha-472903-m02")
	I0916 23:57:24.304639  804231 system_pods.go:61] "kube-apiserver-ha-472903" [e2844751-3962-4753-8b63-79c124dd5fd7] Running
	I0916 23:57:24.304643  804231 system_pods.go:61] "kube-apiserver-ha-472903-m02" [6675419c-7693-4970-b73c-8415bcda1684] Pending
	I0916 23:57:24.304646  804231 system_pods.go:61] "kube-controller-manager-ha-472903" [be5cfd0b-a3b9-44cf-8cde-74e9eb89c738] Running
	I0916 23:57:24.304650  804231 system_pods.go:61] "kube-controller-manager-ha-472903-m02" [54f6e7e0-0a78-4651-b24f-f902c6bf7efb] Pending
	I0916 23:57:24.304657  804231 system_pods.go:61] "kube-proxy-58lkb" [32fed88c-ce9e-4536-8e96-04ab5b4f5d42] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-58lkb": pod kube-proxy-58lkb is already assigned to node "ha-472903-m02")
	I0916 23:57:24.304662  804231 system_pods.go:61] "kube-proxy-d4m8f" [d4a70eec-48a7-4ea6-871a-1b5ed2beca9a] Running
	I0916 23:57:24.304666  804231 system_pods.go:61] "kube-proxy-mf26q" [34502b32-75c1-4078-abd2-4e4d625252d8] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-mf26q": pod kube-proxy-mf26q is already assigned to node "ha-472903-m02")
	I0916 23:57:24.304670  804231 system_pods.go:61] "kube-scheduler-ha-472903" [e949de65-b218-45cb-abe7-79b704aae473] Running
	I0916 23:57:24.304677  804231 system_pods.go:61] "kube-scheduler-ha-472903-m02" [08b5a4f0-3aa6-4a82-b171-afc1eafcd4c2] Pending
	I0916 23:57:24.304679  804231 system_pods.go:61] "kube-vip-ha-472903" [ccdab212-cf0c-4bf0-958b-173e1008f7bc] Running
	I0916 23:57:24.304682  804231 system_pods.go:61] "kube-vip-ha-472903-m02" [748f096f-bec6-4de8-92f0-128db827bdd6] Pending
	I0916 23:57:24.304687  804231 system_pods.go:61] "storage-provisioner" [ac7f283e-4d28-46cf-a519-bd227237d5e7] Running
	I0916 23:57:24.304694  804231 system_pods.go:74] duration metric: took 4.122792ms to wait for pod list to return data ...
	I0916 23:57:24.304700  804231 default_sa.go:34] waiting for default service account to be created ...
	I0916 23:57:24.307165  804231 default_sa.go:45] found service account: "default"
	I0916 23:57:24.307183  804231 default_sa.go:55] duration metric: took 2.474442ms for default service account to be created ...
	I0916 23:57:24.307190  804231 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 23:57:24.310491  804231 system_pods.go:86] 19 kube-system pods found
	I0916 23:57:24.310512  804231 system_pods.go:89] "coredns-66bc5c9577-c94hz" [774f1c0f-9759-44c2-957d-5a97670f951b] Running
	I0916 23:57:24.310517  804231 system_pods.go:89] "coredns-66bc5c9577-qn8m7" [1c58205e-e865-42fc-8282-23e3d779ee97] Running
	I0916 23:57:24.310520  804231 system_pods.go:89] "etcd-ha-472903" [e333577b-838c-41c5-ba86-ce3d7de57077] Running
	I0916 23:57:24.310524  804231 system_pods.go:89] "etcd-ha-472903-m02" [8a478117-c53d-4621-aa09-be3c16d386c0] Pending
	I0916 23:57:24.310527  804231 system_pods.go:89] "kindnet-lh7dv" [1da43ca7-9af7-4573-9cdc-fd21b098ca2c] Running
	I0916 23:57:24.310532  804231 system_pods.go:89] "kindnet-mwf8l" [8c9533d3-defe-487b-a9b4-0502fb8f2d2a] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-mwf8l": pod kindnet-mwf8l is being deleted, cannot be assigned to a host)
	I0916 23:57:24.310556  804231 system_pods.go:89] "kindnet-q7c7s" [85db5b30-8ace-4bb0-8886-32b9ca032b2b] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-q7c7s": pod kindnet-q7c7s is already assigned to node "ha-472903-m02")
	I0916 23:57:24.310566  804231 system_pods.go:89] "kube-apiserver-ha-472903" [e2844751-3962-4753-8b63-79c124dd5fd7] Running
	I0916 23:57:24.310571  804231 system_pods.go:89] "kube-apiserver-ha-472903-m02" [6675419c-7693-4970-b73c-8415bcda1684] Pending
	I0916 23:57:24.310576  804231 system_pods.go:89] "kube-controller-manager-ha-472903" [be5cfd0b-a3b9-44cf-8cde-74e9eb89c738] Running
	I0916 23:57:24.310580  804231 system_pods.go:89] "kube-controller-manager-ha-472903-m02" [54f6e7e0-0a78-4651-b24f-f902c6bf7efb] Pending
	I0916 23:57:24.310588  804231 system_pods.go:89] "kube-proxy-58lkb" [32fed88c-ce9e-4536-8e96-04ab5b4f5d42] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-58lkb": pod kube-proxy-58lkb is already assigned to node "ha-472903-m02")
	I0916 23:57:24.310591  804231 system_pods.go:89] "kube-proxy-d4m8f" [d4a70eec-48a7-4ea6-871a-1b5ed2beca9a] Running
	I0916 23:57:24.310596  804231 system_pods.go:89] "kube-proxy-mf26q" [34502b32-75c1-4078-abd2-4e4d625252d8] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-mf26q": pod kube-proxy-mf26q is already assigned to node "ha-472903-m02")
	I0916 23:57:24.310600  804231 system_pods.go:89] "kube-scheduler-ha-472903" [e949de65-b218-45cb-abe7-79b704aae473] Running
	I0916 23:57:24.310603  804231 system_pods.go:89] "kube-scheduler-ha-472903-m02" [08b5a4f0-3aa6-4a82-b171-afc1eafcd4c2] Pending
	I0916 23:57:24.310608  804231 system_pods.go:89] "kube-vip-ha-472903" [ccdab212-cf0c-4bf0-958b-173e1008f7bc] Running
	I0916 23:57:24.310611  804231 system_pods.go:89] "kube-vip-ha-472903-m02" [748f096f-bec6-4de8-92f0-128db827bdd6] Pending
	I0916 23:57:24.310614  804231 system_pods.go:89] "storage-provisioner" [ac7f283e-4d28-46cf-a519-bd227237d5e7] Running
	I0916 23:57:24.310621  804231 system_pods.go:126] duration metric: took 3.426124ms to wait for k8s-apps to be running ...
	I0916 23:57:24.310629  804231 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 23:57:24.310666  804231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 23:57:24.322152  804231 system_svc.go:56] duration metric: took 11.515834ms WaitForService to wait for kubelet
	I0916 23:57:24.322176  804231 kubeadm.go:578] duration metric: took 657.167547ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
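The Pending kube-proxy and kindnet entries above appear to be DaemonSet pods caught in a binding race while the new node joins; the wait above still reports success with those entries present. To look at the same state by hand (assuming the ha-472903 context and the m02 node name from the log):

  $ kubectl --context ha-472903 -n kube-system get pods -o wide
  $ minikube -p ha-472903 ssh -n m02 -- sudo systemctl is-active kubelet   # mirrors the is-active check above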
	I0916 23:57:24.322199  804231 node_conditions.go:102] verifying NodePressure condition ...
	I0916 23:57:24.327707  804231 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 23:57:24.327734  804231 node_conditions.go:123] node cpu capacity is 8
	I0916 23:57:24.327748  804231 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 23:57:24.327754  804231 node_conditions.go:123] node cpu capacity is 8
	I0916 23:57:24.327759  804231 node_conditions.go:105] duration metric: took 5.554046ms to run NodePressure ...
	I0916 23:57:24.327772  804231 start.go:241] waiting for startup goroutines ...
	I0916 23:57:24.327803  804231 start.go:255] writing updated cluster config ...
	I0916 23:57:24.329316  804231 out.go:203] 
	I0916 23:57:24.330356  804231 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0916 23:57:24.330485  804231 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0916 23:57:24.331956  804231 out.go:179] * Starting "ha-472903-m03" control-plane node in "ha-472903" cluster
	I0916 23:57:24.332973  804231 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0916 23:57:24.333962  804231 out.go:179] * Pulling base image v0.0.48 ...
	I0916 23:57:24.334852  804231 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0916 23:57:24.334875  804231 cache.go:58] Caching tarball of preloaded images
	I0916 23:57:24.334942  804231 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0916 23:57:24.334986  804231 preload.go:172] Found /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 23:57:24.334997  804231 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0916 23:57:24.335117  804231 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0916 23:57:24.357217  804231 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0916 23:57:24.357233  804231 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0916 23:57:24.357242  804231 cache.go:232] Successfully downloaded all kic artifacts
	I0916 23:57:24.357267  804231 start.go:360] acquireMachinesLock for ha-472903-m03: {Name:mk61000bb8e4699ca3310a7fc257e30a156b69de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 23:57:24.357354  804231 start.go:364] duration metric: took 71.354µs to acquireMachinesLock for "ha-472903-m03"
	I0916 23:57:24.357375  804231 start.go:93] Provisioning new machine with config: &{Name:ha-472903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 23:57:24.357498  804231 start.go:125] createHost starting for "m03" (driver="docker")
	I0916 23:57:24.358917  804231 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0916 23:57:24.358994  804231 start.go:159] libmachine.API.Create for "ha-472903" (driver="docker")
	I0916 23:57:24.359023  804231 client.go:168] LocalClient.Create starting
	I0916 23:57:24.359071  804231 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem
	I0916 23:57:24.359103  804231 main.go:141] libmachine: Decoding PEM data...
	I0916 23:57:24.359116  804231 main.go:141] libmachine: Parsing certificate...
	I0916 23:57:24.359164  804231 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem
	I0916 23:57:24.359182  804231 main.go:141] libmachine: Decoding PEM data...
	I0916 23:57:24.359192  804231 main.go:141] libmachine: Parsing certificate...
	I0916 23:57:24.359366  804231 cli_runner.go:164] Run: docker network inspect ha-472903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:57:24.375654  804231 network_create.go:77] Found existing network {name:ha-472903 subnet:0xc001b33bf0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0916 23:57:24.375684  804231 kic.go:121] calculated static IP "192.168.49.4" for the "ha-472903-m03" container
	I0916 23:57:24.375740  804231 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 23:57:24.392165  804231 cli_runner.go:164] Run: docker volume create ha-472903-m03 --label name.minikube.sigs.k8s.io=ha-472903-m03 --label created_by.minikube.sigs.k8s.io=true
	I0916 23:57:24.408273  804231 oci.go:103] Successfully created a docker volume ha-472903-m03
	I0916 23:57:24.408342  804231 cli_runner.go:164] Run: docker run --rm --name ha-472903-m03-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-472903-m03 --entrypoint /usr/bin/test -v ha-472903-m03:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0916 23:57:24.957699  804231 oci.go:107] Successfully prepared a docker volume ha-472903-m03
	I0916 23:57:24.957748  804231 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0916 23:57:24.957783  804231 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 23:57:24.957856  804231 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-472903-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 23:57:29.095091  804231 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-472903-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.13717471s)
	I0916 23:57:29.095123  804231 kic.go:203] duration metric: took 4.137337977s to extract preloaded images to volume ...
	W0916 23:57:29.095214  804231 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0916 23:57:29.095253  804231 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0916 23:57:29.095300  804231 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 23:57:29.145859  804231 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-472903-m03 --name ha-472903-m03 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-472903-m03 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-472903-m03 --network ha-472903 --ip 192.168.49.4 --volume ha-472903-m03:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0916 23:57:29.392873  804231 cli_runner.go:164] Run: docker container inspect ha-472903-m03 --format={{.State.Running}}
	I0916 23:57:29.412389  804231 cli_runner.go:164] Run: docker container inspect ha-472903-m03 --format={{.State.Status}}
	I0916 23:57:29.430593  804231 cli_runner.go:164] Run: docker exec ha-472903-m03 stat /var/lib/dpkg/alternatives/iptables
	I0916 23:57:29.476672  804231 oci.go:144] the created container "ha-472903-m03" has a running status.
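ha-472903-m03 is an ordinary Docker container on the ha-472903 network; the 127.0.0.1 ports used for SSH further down (33554 here) are the dynamically published ones from the docker run above. A quick way to read them back, as a sketch:

  $ docker container inspect ha-472903-m03 --format '{{.State.Status}}'   # running
  $ docker port ha-472903-m03 22      # host port backing the SSH connections below
  $ docker port ha-472903-m03 8443    # host port backing the published apiserver port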
	I0916 23:57:29.476707  804231 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa...
	I0916 23:57:29.927926  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0916 23:57:29.927968  804231 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 23:57:29.954518  804231 cli_runner.go:164] Run: docker container inspect ha-472903-m03 --format={{.State.Status}}
	I0916 23:57:29.975503  804231 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 23:57:29.975522  804231 kic_runner.go:114] Args: [docker exec --privileged ha-472903-m03 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 23:57:30.023965  804231 cli_runner.go:164] Run: docker container inspect ha-472903-m03 --format={{.State.Status}}
	I0916 23:57:30.040966  804231 machine.go:93] provisionDockerMachine start ...
	I0916 23:57:30.041051  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0916 23:57:30.058157  804231 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:30.058388  804231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33554 <nil> <nil>}
	I0916 23:57:30.058400  804231 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 23:57:30.190964  804231 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-472903-m03
	
	I0916 23:57:30.190995  804231 ubuntu.go:182] provisioning hostname "ha-472903-m03"
	I0916 23:57:30.191059  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0916 23:57:30.208862  804231 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:30.209123  804231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33554 <nil> <nil>}
	I0916 23:57:30.209144  804231 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-472903-m03 && echo "ha-472903-m03" | sudo tee /etc/hostname
	I0916 23:57:30.354363  804231 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-472903-m03
	
	I0916 23:57:30.354466  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0916 23:57:30.372285  804231 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:30.372570  804231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33554 <nil> <nil>}
	I0916 23:57:30.372590  804231 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-472903-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-472903-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-472903-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 23:57:30.504861  804231 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 23:57:30.504898  804231 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-749120/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-749120/.minikube}
	I0916 23:57:30.504920  804231 ubuntu.go:190] setting up certificates
	I0916 23:57:30.504933  804231 provision.go:84] configureAuth start
	I0916 23:57:30.504996  804231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m03
	I0916 23:57:30.522218  804231 provision.go:143] copyHostCerts
	I0916 23:57:30.522259  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem
	I0916 23:57:30.522297  804231 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem, removing ...
	I0916 23:57:30.522306  804231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem
	I0916 23:57:30.522369  804231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem (1078 bytes)
	I0916 23:57:30.522483  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem
	I0916 23:57:30.522506  804231 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem, removing ...
	I0916 23:57:30.522510  804231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem
	I0916 23:57:30.522547  804231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem (1123 bytes)
	I0916 23:57:30.522650  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem
	I0916 23:57:30.522673  804231 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem, removing ...
	I0916 23:57:30.522678  804231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem
	I0916 23:57:30.522703  804231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem (1675 bytes)
	I0916 23:57:30.522769  804231 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem org=jenkins.ha-472903-m03 san=[127.0.0.1 192.168.49.4 ha-472903-m03 localhost minikube]
	I0916 23:57:30.644066  804231 provision.go:177] copyRemoteCerts
	I0916 23:57:30.644118  804231 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 23:57:30.644153  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0916 23:57:30.661612  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33554 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa Username:docker}
	I0916 23:57:30.757452  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 23:57:30.757504  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 23:57:30.782942  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 23:57:30.782994  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 23:57:30.806508  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 23:57:30.806562  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 23:57:30.829686  804231 provision.go:87] duration metric: took 324.735799ms to configureAuth
	I0916 23:57:30.829709  804231 ubuntu.go:206] setting minikube options for container-runtime
	I0916 23:57:30.829902  804231 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0916 23:57:30.829916  804231 machine.go:96] duration metric: took 788.930334ms to provisionDockerMachine
	I0916 23:57:30.829925  804231 client.go:171] duration metric: took 6.470893656s to LocalClient.Create
	I0916 23:57:30.829958  804231 start.go:167] duration metric: took 6.470963089s to libmachine.API.Create "ha-472903"
	I0916 23:57:30.829971  804231 start.go:293] postStartSetup for "ha-472903-m03" (driver="docker")
	I0916 23:57:30.829982  804231 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 23:57:30.830042  804231 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 23:57:30.830092  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0916 23:57:30.847215  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33554 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa Username:docker}
	I0916 23:57:30.945849  804231 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 23:57:30.949055  804231 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 23:57:30.949086  804231 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 23:57:30.949098  804231 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 23:57:30.949107  804231 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0916 23:57:30.949120  804231 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-749120/.minikube/addons for local assets ...
	I0916 23:57:30.949174  804231 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-749120/.minikube/files for local assets ...
	I0916 23:57:30.949274  804231 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> 7527072.pem in /etc/ssl/certs
	I0916 23:57:30.949286  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> /etc/ssl/certs/7527072.pem
	I0916 23:57:30.949392  804231 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 23:57:30.957998  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem --> /etc/ssl/certs/7527072.pem (1708 bytes)
	I0916 23:57:30.983779  804231 start.go:296] duration metric: took 153.794843ms for postStartSetup
	I0916 23:57:30.984109  804231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m03
	I0916 23:57:31.001367  804231 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0916 23:57:31.001618  804231 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 23:57:31.001659  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0916 23:57:31.019034  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33554 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa Username:docker}
	I0916 23:57:31.110814  804231 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 23:57:31.115046  804231 start.go:128] duration metric: took 6.757532739s to createHost
	I0916 23:57:31.115072  804231 start.go:83] releasing machines lock for "ha-472903-m03", held for 6.757707303s
	I0916 23:57:31.115154  804231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m03
	I0916 23:57:31.133371  804231 out.go:179] * Found network options:
	I0916 23:57:31.134481  804231 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0916 23:57:31.135570  804231 proxy.go:120] fail to check proxy env: Error ip not in block
	W0916 23:57:31.135598  804231 proxy.go:120] fail to check proxy env: Error ip not in block
	W0916 23:57:31.135626  804231 proxy.go:120] fail to check proxy env: Error ip not in block
	W0916 23:57:31.135644  804231 proxy.go:120] fail to check proxy env: Error ip not in block
	I0916 23:57:31.135714  804231 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 23:57:31.135763  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0916 23:57:31.135778  804231 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 23:57:31.135845  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0916 23:57:31.152320  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33554 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa Username:docker}
	I0916 23:57:31.153909  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33554 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa Username:docker}
	I0916 23:57:31.320495  804231 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 23:57:31.348141  804231 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 23:57:31.348214  804231 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 23:57:31.373693  804231 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
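The bridge and podman CNI configs are only renamed with a .mk_disabled suffix, and the loopback config is patched in place, so the directory itself records what changed. A quick check on the node, using the same docker exec access the provisioner uses:

  $ docker exec ha-472903-m03 ls -la /etc/cni/net.d   # expect *.mk_disabled files next to the patched loopback conf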
	I0916 23:57:31.373720  804231 start.go:495] detecting cgroup driver to use...
	I0916 23:57:31.373748  804231 detect.go:190] detected "systemd" cgroup driver on host os
	I0916 23:57:31.373802  804231 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 23:57:31.385560  804231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 23:57:31.396165  804231 docker.go:218] disabling cri-docker service (if available) ...
	I0916 23:57:31.396214  804231 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 23:57:31.409119  804231 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 23:57:31.422244  804231 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 23:57:31.489491  804231 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 23:57:31.557098  804231 docker.go:234] disabling docker service ...
	I0916 23:57:31.557149  804231 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 23:57:31.574601  804231 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 23:57:31.585773  804231 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 23:57:31.649988  804231 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 23:57:31.717070  804231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 23:57:31.727904  804231 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 23:57:31.743685  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0916 23:57:31.755962  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 23:57:31.766072  804231 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0916 23:57:31.766138  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0916 23:57:31.775522  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 23:57:31.785914  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 23:57:31.795134  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 23:57:31.804565  804231 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 23:57:31.813319  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 23:57:31.822500  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 23:57:31.831597  804231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 23:57:31.840887  804231 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 23:57:31.848842  804231 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 23:57:31.857026  804231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:31.920521  804231 ssh_runner.go:195] Run: sudo systemctl restart containerd
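The sed edits above switch containerd to the systemd cgroup driver (matching the driver detected on the host), pin the sandbox image to registry.k8s.io/pause:3.10.1, point conf_dir at /etc/cni/net.d, and re-enable unprivileged ports before the restart. A spot check of the resulting config, assuming the stock containerd 1.7 config.toml layout:

  $ docker exec ha-472903-m03 grep -nE 'SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
  $ docker exec ha-472903-m03 sudo systemctl is-active containerd   # should be active after the restart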
	I0916 23:57:32.022746  804231 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0916 23:57:32.022804  804231 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0916 23:57:32.026838  804231 start.go:563] Will wait 60s for crictl version
	I0916 23:57:32.026888  804231 ssh_runner.go:195] Run: which crictl
	I0916 23:57:32.030295  804231 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 23:57:32.064100  804231 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0916 23:57:32.064158  804231 ssh_runner.go:195] Run: containerd --version
	I0916 23:57:32.088276  804231 ssh_runner.go:195] Run: containerd --version
	I0916 23:57:32.114182  804231 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0916 23:57:32.115194  804231 out.go:179]   - env NO_PROXY=192.168.49.2
	I0916 23:57:32.116236  804231 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0916 23:57:32.117151  804231 cli_runner.go:164] Run: docker network inspect ha-472903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:57:32.133290  804231 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 23:57:32.136901  804231 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 23:57:32.147860  804231 mustload.go:65] Loading cluster: ha-472903
	I0916 23:57:32.148060  804231 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0916 23:57:32.148275  804231 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0916 23:57:32.164278  804231 host.go:66] Checking if "ha-472903" exists ...
	I0916 23:57:32.164570  804231 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903 for IP: 192.168.49.4
	I0916 23:57:32.164584  804231 certs.go:194] generating shared ca certs ...
	I0916 23:57:32.164601  804231 certs.go:226] acquiring lock for ca certs: {Name:mk87d179b4a631193bd9c86db8034ccf19400cde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:32.164751  804231 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key
	I0916 23:57:32.164800  804231 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key
	I0916 23:57:32.164814  804231 certs.go:256] generating profile certs ...
	I0916 23:57:32.164911  804231 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key
	I0916 23:57:32.164940  804231 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.14b885b8
	I0916 23:57:32.164958  804231 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.14b885b8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I0916 23:57:32.342596  804231 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.14b885b8 ...
	I0916 23:57:32.342623  804231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.14b885b8: {Name:mk455c3f0ae4544ddcdf75c25cbd1b87a24e61a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:32.342787  804231 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.14b885b8 ...
	I0916 23:57:32.342799  804231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.14b885b8: {Name:mkbd551bf9ae23c129f7e263550d20b4aac5d095 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:32.342871  804231 certs.go:381] copying /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.14b885b8 -> /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt
	I0916 23:57:32.343007  804231 certs.go:385] copying /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.14b885b8 -> /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key
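The apiserver serving certificate is regenerated here because it now has to cover the new node's IP as well as the other control-plane IPs and the 192.168.49.254 VIP listed in the SAN set above. One way to confirm the SANs on the written cert, with the same openssl binary the provisioner calls later:

  $ openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt \
    | grep -A1 'Subject Alternative Name'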
	I0916 23:57:32.343136  804231 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key
	I0916 23:57:32.343152  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 23:57:32.343165  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 23:57:32.343178  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 23:57:32.343191  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 23:57:32.343204  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 23:57:32.343214  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 23:57:32.343229  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 23:57:32.343247  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 23:57:32.343299  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem (1338 bytes)
	W0916 23:57:32.343327  804231 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707_empty.pem, impossibly tiny 0 bytes
	I0916 23:57:32.343337  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem (1675 bytes)
	I0916 23:57:32.343357  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem (1078 bytes)
	I0916 23:57:32.343379  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem (1123 bytes)
	I0916 23:57:32.343400  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem (1675 bytes)
	I0916 23:57:32.343464  804231 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem (1708 bytes)
	I0916 23:57:32.343501  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:32.343521  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem -> /usr/share/ca-certificates/752707.pem
	I0916 23:57:32.343534  804231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> /usr/share/ca-certificates/7527072.pem
	I0916 23:57:32.343588  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:57:32.360782  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0916 23:57:32.447595  804231 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0916 23:57:32.451217  804231 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0916 23:57:32.464033  804231 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0916 23:57:32.467273  804231 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0916 23:57:32.478860  804231 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0916 23:57:32.482180  804231 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0916 23:57:32.493717  804231 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0916 23:57:32.496761  804231 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0916 23:57:32.507849  804231 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0916 23:57:32.511054  804231 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0916 23:57:32.523733  804231 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0916 23:57:32.526954  804231 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0916 23:57:32.538314  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 23:57:32.561866  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 23:57:32.585900  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 23:57:32.610048  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0916 23:57:32.634812  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0916 23:57:32.659163  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 23:57:32.682157  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 23:57:32.704663  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 23:57:32.727856  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 23:57:32.752740  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem --> /usr/share/ca-certificates/752707.pem (1338 bytes)
	I0916 23:57:32.775900  804231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem --> /usr/share/ca-certificates/7527072.pem (1708 bytes)
	I0916 23:57:32.798720  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0916 23:57:32.815542  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0916 23:57:32.832241  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0916 23:57:32.848964  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0916 23:57:32.865780  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0916 23:57:32.882614  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0916 23:57:32.899296  804231 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0916 23:57:32.916516  804231 ssh_runner.go:195] Run: openssl version
	I0916 23:57:32.921611  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7527072.pem && ln -fs /usr/share/ca-certificates/7527072.pem /etc/ssl/certs/7527072.pem"
	I0916 23:57:32.930917  804231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7527072.pem
	I0916 23:57:32.934241  804231 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 23:54 /usr/share/ca-certificates/7527072.pem
	I0916 23:57:32.934283  804231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7527072.pem
	I0916 23:57:32.941354  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7527072.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 23:57:32.950335  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 23:57:32.959292  804231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:32.962576  804231 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:32.962623  804231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:32.968989  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 23:57:32.978331  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/752707.pem && ln -fs /usr/share/ca-certificates/752707.pem /etc/ssl/certs/752707.pem"
	I0916 23:57:32.987188  804231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/752707.pem
	I0916 23:57:32.990463  804231 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 23:54 /usr/share/ca-certificates/752707.pem
	I0916 23:57:32.990497  804231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/752707.pem
	I0916 23:57:32.996813  804231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/752707.pem /etc/ssl/certs/51391683.0"
	I0916 23:57:33.005924  804231 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 23:57:33.009122  804231 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 23:57:33.009183  804231 kubeadm.go:926] updating node {m03 192.168.49.4 8443 v1.34.0 containerd true true} ...
	I0916 23:57:33.009266  804231 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-472903-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
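This drop-in replaces the kubelet ExecStart so the m03 kubelet starts with its own --node-ip and --hostname-override; it is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below. To see the effective unit on the node:

  $ docker exec ha-472903-m03 systemctl cat kubelet
  $ docker exec ha-472903-m03 cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf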
	I0916 23:57:33.009291  804231 kube-vip.go:115] generating kube-vip config ...
	I0916 23:57:33.009319  804231 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0916 23:57:33.021189  804231 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0916 23:57:33.021246  804231 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
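This rendered manifest is written to /etc/kubernetes/manifests/kube-vip.yaml (see the scp line just below), where the kubelet runs kube-vip as a static pod that advertises the HA VIP 192.168.49.254 on eth0 via ARP leader election. A hedged sketch of sanity-checking such a generated manifest by decoding it into a corev1.Pod follows; this validation step is illustrative only and not something minikube's kube-vip.go is shown doing here.

package main

import (
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// Read the generated manifest, e.g. the YAML document shown above.
	data, err := os.ReadFile("kube-vip.yaml")
	if err != nil {
		panic(err)
	}
	var pod corev1.Pod
	if err := yaml.Unmarshal(data, &pod); err != nil {
		panic(fmt.Errorf("generated kube-vip manifest is not a valid Pod: %w", err))
	}
	// Spot-check the fields the HA setup depends on.
	fmt.Println("name:", pod.Name, "hostNetwork:", pod.Spec.HostNetwork)
	for _, c := range pod.Spec.Containers {
		for _, env := range c.Env {
			if env.Name == "address" {
				fmt.Println("VIP:", env.Value) // expected 192.168.49.254
			}
		}
	}
}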
	I0916 23:57:33.021293  804231 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0916 23:57:33.029533  804231 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 23:57:33.029576  804231 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0916 23:57:33.038861  804231 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0916 23:57:33.056092  804231 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 23:57:33.075506  804231 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0916 23:57:33.093918  804231 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0916 23:57:33.097171  804231 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 23:57:33.107668  804231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:33.167706  804231 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 23:57:33.188453  804231 host.go:66] Checking if "ha-472903" exists ...
	I0916 23:57:33.188671  804231 start.go:317] joinCluster: &{Name:ha-472903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 23:57:33.188781  804231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0916 23:57:33.188819  804231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0916 23:57:33.210165  804231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0916 23:57:33.351871  804231 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 23:57:33.351930  804231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token uj456s.97hymgg3kmg6owuv --discovery-token-ca-cert-hash sha256:52c78ec9ad9a2dc0941e43ce337b864c76ea573e452bc75ed737e69ad76deac1 --ignore-preflight-errors=all --cri-socket unix:///run/containerd/containerd.sock --node-name=ha-472903-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443"
	I0916 23:57:51.860237  804231 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token uj456s.97hymgg3kmg6owuv --discovery-token-ca-cert-hash sha256:52c78ec9ad9a2dc0941e43ce337b864c76ea573e452bc75ed737e69ad76deac1 --ignore-preflight-errors=all --cri-socket unix:///run/containerd/containerd.sock --node-name=ha-472903-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443": (18.508258539s)
	I0916 23:57:51.860308  804231 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0916 23:57:52.080986  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-472903-m03 minikube.k8s.io/updated_at=2025_09_16T23_57_52_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a minikube.k8s.io/name=ha-472903 minikube.k8s.io/primary=false
	I0916 23:57:52.152525  804231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-472903-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0916 23:57:52.226560  804231 start.go:319] duration metric: took 19.037884553s to joinCluster
	I0916 23:57:52.226624  804231 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 23:57:52.226912  804231 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0916 23:57:52.227744  804231 out.go:179] * Verifying Kubernetes components...
	I0916 23:57:52.228620  804231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:52.334638  804231 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 23:57:52.349036  804231 kapi.go:59] client config for ha-472903: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key", CAFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
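The rest.Config dumped above targets the VIP (https://192.168.49.254:8443) with the profile's client certificate and CA; the next line shows the stale VIP host being overridden to the primary's address. For reference, a hedged client-go sketch that builds an equivalent config from the same files and fetches the newly joined node; the hard-coded host and paths are simply the values visible in the log, not an API the test exposes.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Rough equivalent of the rest.Config dumped in the log, minus the wrapped transport.
	cfg := &rest.Config{
		Host: "https://192.168.49.2:8443", // primary endpoint after the stale-VIP override
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.crt",
			KeyFile:  "/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key",
			CAFile:   "/home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt",
		},
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Fetch the node the test is waiting on and print its conditions.
	node, err := clientset.CoreV1().Nodes().Get(context.Background(), "ha-472903-m03", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println(node.Name, "conditions:", node.Status.Conditions)
}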
	W0916 23:57:52.349105  804231 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0916 23:57:52.349317  804231 node_ready.go:35] waiting up to 6m0s for node "ha-472903-m03" to be "Ready" ...
	I0916 23:57:54.352346  804231 node_ready.go:49] node "ha-472903-m03" is "Ready"
	I0916 23:57:54.352374  804231 node_ready.go:38] duration metric: took 2.003044453s for node "ha-472903-m03" to be "Ready" ...
	I0916 23:57:54.352389  804231 api_server.go:52] waiting for apiserver process to appear ...
	I0916 23:57:54.352476  804231 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 23:57:54.365259  804231 api_server.go:72] duration metric: took 2.138606454s to wait for apiserver process to appear ...
	I0916 23:57:54.365280  804231 api_server.go:88] waiting for apiserver healthz status ...
	I0916 23:57:54.365298  804231 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0916 23:57:54.370985  804231 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0916 23:57:54.371831  804231 api_server.go:141] control plane version: v1.34.0
	I0916 23:57:54.371850  804231 api_server.go:131] duration metric: took 6.564025ms to wait for apiserver health ...
	I0916 23:57:54.371858  804231 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 23:57:54.376785  804231 system_pods.go:59] 27 kube-system pods found
	I0916 23:57:54.376811  804231 system_pods.go:61] "coredns-66bc5c9577-c94hz" [774f1c0f-9759-44c2-957d-5a97670f951b] Running
	I0916 23:57:54.376815  804231 system_pods.go:61] "coredns-66bc5c9577-qn8m7" [1c58205e-e865-42fc-8282-23e3d779ee97] Running
	I0916 23:57:54.376818  804231 system_pods.go:61] "etcd-ha-472903" [e333577b-838c-41c5-ba86-ce3d7de57077] Running
	I0916 23:57:54.376822  804231 system_pods.go:61] "etcd-ha-472903-m02" [8a478117-c53d-4621-aa09-be3c16d386c0] Running
	I0916 23:57:54.376824  804231 system_pods.go:61] "etcd-ha-472903-m03" [73e10c6a-306a-4c7e-b816-1b8d6b815292] Pending
	I0916 23:57:54.376830  804231 system_pods.go:61] "kindnet-2dqnn" [f5c4164d-0d88-4b7b-bc52-18a7e211fe98] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-2dqnn": pod kindnet-2dqnn is already assigned to node "ha-472903-m03")
	I0916 23:57:54.376833  804231 system_pods.go:61] "kindnet-lh7dv" [1da43ca7-9af7-4573-9cdc-fd21b098ca2c] Running
	I0916 23:57:54.376838  804231 system_pods.go:61] "kindnet-q7c7s" [85db5b30-8ace-4bb0-8886-32b9ca032b2b] Running
	I0916 23:57:54.376842  804231 system_pods.go:61] "kindnet-wwdfr" [e86a6e30-712e-4d39-a235-87489d16c0f3] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-wwdfr": pod kindnet-wwdfr is already assigned to node "ha-472903-m03")
	I0916 23:57:54.376849  804231 system_pods.go:61] "kindnet-x6twd" [f2346479-1adb-4bc7-af07-971525be2b05] Pending: PodScheduled:SchedulerError (pod f2346479-1adb-4bc7-af07-971525be2b05(kube-system/kindnet-x6twd) is in the cache, so can't be assumed)
	I0916 23:57:54.376853  804231 system_pods.go:61] "kube-apiserver-ha-472903" [e2844751-3962-4753-8b63-79c124dd5fd7] Running
	I0916 23:57:54.376858  804231 system_pods.go:61] "kube-apiserver-ha-472903-m02" [6675419c-7693-4970-b73c-8415bcda1684] Running
	I0916 23:57:54.376861  804231 system_pods.go:61] "kube-apiserver-ha-472903-m03" [8c79e747-e193-4471-a4be-ab4d604998ad] Pending
	I0916 23:57:54.376867  804231 system_pods.go:61] "kube-controller-manager-ha-472903" [be5cfd0b-a3b9-44cf-8cde-74e9eb89c738] Running
	I0916 23:57:54.376870  804231 system_pods.go:61] "kube-controller-manager-ha-472903-m02" [54f6e7e0-0a78-4651-b24f-f902c6bf7efb] Running
	I0916 23:57:54.376873  804231 system_pods.go:61] "kube-controller-manager-ha-472903-m03" [d48b1c84-6653-43e8-9322-ae2c64471dde] Pending
	I0916 23:57:54.376876  804231 system_pods.go:61] "kube-proxy-58lkb" [32fed88c-ce9e-4536-8e96-04ab5b4f5d42] Running
	I0916 23:57:54.376881  804231 system_pods.go:61] "kube-proxy-d4m8f" [d4a70eec-48a7-4ea6-871a-1b5ed2beca9a] Running
	I0916 23:57:54.376885  804231 system_pods.go:61] "kube-proxy-kn6nb" [53644856-9fda-4556-bbb5-12254c4b00a3] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-kn6nb": pod kube-proxy-kn6nb is already assigned to node "ha-472903-m03")
	I0916 23:57:54.376889  804231 system_pods.go:61] "kube-proxy-xhlnz" [1967fed1-7529-46d0-accd-ab74751b47fa] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-xhlnz": pod kube-proxy-xhlnz is already assigned to node "ha-472903-m03")
	I0916 23:57:54.376894  804231 system_pods.go:61] "kube-scheduler-ha-472903" [e949de65-b218-45cb-abe7-79b704aae473] Running
	I0916 23:57:54.376897  804231 system_pods.go:61] "kube-scheduler-ha-472903-m02" [08b5a4f0-3aa6-4a82-b171-afc1eafcd4c2] Running
	I0916 23:57:54.376900  804231 system_pods.go:61] "kube-scheduler-ha-472903-m03" [7b954c46-3b8c-47e7-b10c-a20dd936d45c] Pending
	I0916 23:57:54.376904  804231 system_pods.go:61] "kube-vip-ha-472903" [ccdab212-cf0c-4bf0-958b-173e1008f7bc] Running
	I0916 23:57:54.376907  804231 system_pods.go:61] "kube-vip-ha-472903-m02" [748f096f-bec6-4de8-92f0-128db827bdd6] Running
	I0916 23:57:54.376910  804231 system_pods.go:61] "kube-vip-ha-472903-m03" [62b1f237-95c3-4a40-b3b7-a519f7c80ad4] Pending
	I0916 23:57:54.376913  804231 system_pods.go:61] "storage-provisioner" [ac7f283e-4d28-46cf-a519-bd227237d5e7] Running
	I0916 23:57:54.376918  804231 system_pods.go:74] duration metric: took 5.052009ms to wait for pod list to return data ...
	I0916 23:57:54.376925  804231 default_sa.go:34] waiting for default service account to be created ...
	I0916 23:57:54.378969  804231 default_sa.go:45] found service account: "default"
	I0916 23:57:54.378989  804231 default_sa.go:55] duration metric: took 2.056584ms for default service account to be created ...
	I0916 23:57:54.378999  804231 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 23:57:54.383753  804231 system_pods.go:86] 27 kube-system pods found
	I0916 23:57:54.383781  804231 system_pods.go:89] "coredns-66bc5c9577-c94hz" [774f1c0f-9759-44c2-957d-5a97670f951b] Running
	I0916 23:57:54.383790  804231 system_pods.go:89] "coredns-66bc5c9577-qn8m7" [1c58205e-e865-42fc-8282-23e3d779ee97] Running
	I0916 23:57:54.383796  804231 system_pods.go:89] "etcd-ha-472903" [e333577b-838c-41c5-ba86-ce3d7de57077] Running
	I0916 23:57:54.383802  804231 system_pods.go:89] "etcd-ha-472903-m02" [8a478117-c53d-4621-aa09-be3c16d386c0] Running
	I0916 23:57:54.383812  804231 system_pods.go:89] "etcd-ha-472903-m03" [73e10c6a-306a-4c7e-b816-1b8d6b815292] Pending
	I0916 23:57:54.383821  804231 system_pods.go:89] "kindnet-2dqnn" [f5c4164d-0d88-4b7b-bc52-18a7e211fe98] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-2dqnn": pod kindnet-2dqnn is already assigned to node "ha-472903-m03")
	I0916 23:57:54.383829  804231 system_pods.go:89] "kindnet-lh7dv" [1da43ca7-9af7-4573-9cdc-fd21b098ca2c] Running
	I0916 23:57:54.383837  804231 system_pods.go:89] "kindnet-q7c7s" [85db5b30-8ace-4bb0-8886-32b9ca032b2b] Running
	I0916 23:57:54.383842  804231 system_pods.go:89] "kindnet-wwdfr" [e86a6e30-712e-4d39-a235-87489d16c0f3] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-wwdfr": pod kindnet-wwdfr is already assigned to node "ha-472903-m03")
	I0916 23:57:54.383852  804231 system_pods.go:89] "kindnet-x6twd" [f2346479-1adb-4bc7-af07-971525be2b05] Pending: PodScheduled:SchedulerError (pod f2346479-1adb-4bc7-af07-971525be2b05(kube-system/kindnet-x6twd) is in the cache, so can't be assumed)
	I0916 23:57:54.383863  804231 system_pods.go:89] "kube-apiserver-ha-472903" [e2844751-3962-4753-8b63-79c124dd5fd7] Running
	I0916 23:57:54.383874  804231 system_pods.go:89] "kube-apiserver-ha-472903-m02" [6675419c-7693-4970-b73c-8415bcda1684] Running
	I0916 23:57:54.383881  804231 system_pods.go:89] "kube-apiserver-ha-472903-m03" [8c79e747-e193-4471-a4be-ab4d604998ad] Pending
	I0916 23:57:54.383887  804231 system_pods.go:89] "kube-controller-manager-ha-472903" [be5cfd0b-a3b9-44cf-8cde-74e9eb89c738] Running
	I0916 23:57:54.383895  804231 system_pods.go:89] "kube-controller-manager-ha-472903-m02" [54f6e7e0-0a78-4651-b24f-f902c6bf7efb] Running
	I0916 23:57:54.383900  804231 system_pods.go:89] "kube-controller-manager-ha-472903-m03" [d48b1c84-6653-43e8-9322-ae2c64471dde] Pending
	I0916 23:57:54.383908  804231 system_pods.go:89] "kube-proxy-58lkb" [32fed88c-ce9e-4536-8e96-04ab5b4f5d42] Running
	I0916 23:57:54.383913  804231 system_pods.go:89] "kube-proxy-d4m8f" [d4a70eec-48a7-4ea6-871a-1b5ed2beca9a] Running
	I0916 23:57:54.383921  804231 system_pods.go:89] "kube-proxy-kn6nb" [53644856-9fda-4556-bbb5-12254c4b00a3] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-kn6nb": pod kube-proxy-kn6nb is already assigned to node "ha-472903-m03")
	I0916 23:57:54.383930  804231 system_pods.go:89] "kube-proxy-xhlnz" [1967fed1-7529-46d0-accd-ab74751b47fa] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-xhlnz": pod kube-proxy-xhlnz is already assigned to node "ha-472903-m03")
	I0916 23:57:54.383939  804231 system_pods.go:89] "kube-scheduler-ha-472903" [e949de65-b218-45cb-abe7-79b704aae473] Running
	I0916 23:57:54.383946  804231 system_pods.go:89] "kube-scheduler-ha-472903-m02" [08b5a4f0-3aa6-4a82-b171-afc1eafcd4c2] Running
	I0916 23:57:54.383955  804231 system_pods.go:89] "kube-scheduler-ha-472903-m03" [7b954c46-3b8c-47e7-b10c-a20dd936d45c] Pending
	I0916 23:57:54.383962  804231 system_pods.go:89] "kube-vip-ha-472903" [ccdab212-cf0c-4bf0-958b-173e1008f7bc] Running
	I0916 23:57:54.383967  804231 system_pods.go:89] "kube-vip-ha-472903-m02" [748f096f-bec6-4de8-92f0-128db827bdd6] Running
	I0916 23:57:54.383975  804231 system_pods.go:89] "kube-vip-ha-472903-m03" [62b1f237-95c3-4a40-b3b7-a519f7c80ad4] Pending
	I0916 23:57:54.383980  804231 system_pods.go:89] "storage-provisioner" [ac7f283e-4d28-46cf-a519-bd227237d5e7] Running
	I0916 23:57:54.383991  804231 system_pods.go:126] duration metric: took 4.985254ms to wait for k8s-apps to be running ...
	I0916 23:57:54.384002  804231 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 23:57:54.384056  804231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 23:57:54.395540  804231 system_svc.go:56] duration metric: took 11.532177ms WaitForService to wait for kubelet
	I0916 23:57:54.395557  804231 kubeadm.go:578] duration metric: took 2.168909422s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 23:57:54.395577  804231 node_conditions.go:102] verifying NodePressure condition ...
	I0916 23:57:54.398165  804231 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 23:57:54.398183  804231 node_conditions.go:123] node cpu capacity is 8
	I0916 23:57:54.398194  804231 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 23:57:54.398197  804231 node_conditions.go:123] node cpu capacity is 8
	I0916 23:57:54.398201  804231 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 23:57:54.398205  804231 node_conditions.go:123] node cpu capacity is 8
	I0916 23:57:54.398209  804231 node_conditions.go:105] duration metric: took 2.627179ms to run NodePressure ...
	I0916 23:57:54.398219  804231 start.go:241] waiting for startup goroutines ...
	I0916 23:57:54.398248  804231 start.go:255] writing updated cluster config ...
	I0916 23:57:54.398554  804231 ssh_runner.go:195] Run: rm -f paused
	I0916 23:57:54.402187  804231 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0916 23:57:54.402686  804231 kapi.go:59] client config for ha-472903: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key", CAFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 23:57:54.405144  804231 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-c94hz" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:54.409401  804231 pod_ready.go:94] pod "coredns-66bc5c9577-c94hz" is "Ready"
	I0916 23:57:54.409438  804231 pod_ready.go:86] duration metric: took 4.271645ms for pod "coredns-66bc5c9577-c94hz" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:54.409448  804231 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-qn8m7" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:54.413536  804231 pod_ready.go:94] pod "coredns-66bc5c9577-qn8m7" is "Ready"
	I0916 23:57:54.413553  804231 pod_ready.go:86] duration metric: took 4.095453ms for pod "coredns-66bc5c9577-qn8m7" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:54.415699  804231 pod_ready.go:83] waiting for pod "etcd-ha-472903" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:54.419599  804231 pod_ready.go:94] pod "etcd-ha-472903" is "Ready"
	I0916 23:57:54.419618  804231 pod_ready.go:86] duration metric: took 3.899664ms for pod "etcd-ha-472903" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:54.419627  804231 pod_ready.go:83] waiting for pod "etcd-ha-472903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:54.423363  804231 pod_ready.go:94] pod "etcd-ha-472903-m02" is "Ready"
	I0916 23:57:54.423380  804231 pod_ready.go:86] duration metric: took 3.746731ms for pod "etcd-ha-472903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:54.423386  804231 pod_ready.go:83] waiting for pod "etcd-ha-472903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:54.603706  804231 request.go:683] "Waited before sending request" delay="180.227617ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-472903-m03"
	I0916 23:57:54.803902  804231 request.go:683] "Waited before sending request" delay="197.349252ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903-m03"
	I0916 23:57:55.003954  804231 request.go:683] "Waited before sending request" delay="80.206914ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-472903-m03"
	I0916 23:57:55.203362  804231 request.go:683] "Waited before sending request" delay="196.197515ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903-m03"
	I0916 23:57:55.206052  804231 pod_ready.go:94] pod "etcd-ha-472903-m03" is "Ready"
	I0916 23:57:55.206075  804231 pod_ready.go:86] duration metric: took 782.683771ms for pod "etcd-ha-472903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:55.403450  804231 request.go:683] "Waited before sending request" delay="197.254129ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I0916 23:57:55.406629  804231 pod_ready.go:83] waiting for pod "kube-apiserver-ha-472903" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:55.604081  804231 request.go:683] "Waited before sending request" delay="197.327981ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-472903"
	I0916 23:57:55.803277  804231 request.go:683] "Waited before sending request" delay="196.28238ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903"
	I0916 23:57:55.806023  804231 pod_ready.go:94] pod "kube-apiserver-ha-472903" is "Ready"
	I0916 23:57:55.806053  804231 pod_ready.go:86] duration metric: took 399.400731ms for pod "kube-apiserver-ha-472903" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:55.806064  804231 pod_ready.go:83] waiting for pod "kube-apiserver-ha-472903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:56.003360  804231 request.go:683] "Waited before sending request" delay="197.181089ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-472903-m02"
	I0916 23:57:56.203591  804231 request.go:683] "Waited before sending request" delay="197.334062ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903-m02"
	I0916 23:57:56.206593  804231 pod_ready.go:94] pod "kube-apiserver-ha-472903-m02" is "Ready"
	I0916 23:57:56.206619  804231 pod_ready.go:86] duration metric: took 400.548564ms for pod "kube-apiserver-ha-472903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:56.206627  804231 pod_ready.go:83] waiting for pod "kube-apiserver-ha-472903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:56.404053  804231 request.go:683] "Waited before sending request" delay="197.330591ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-472903-m03"
	I0916 23:57:56.603366  804231 request.go:683] "Waited before sending request" delay="196.334008ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903-m03"
	I0916 23:57:56.606216  804231 pod_ready.go:94] pod "kube-apiserver-ha-472903-m03" is "Ready"
	I0916 23:57:56.606240  804231 pod_ready.go:86] duration metric: took 399.60823ms for pod "kube-apiserver-ha-472903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:56.803696  804231 request.go:683] "Waited before sending request" delay="197.341894ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I0916 23:57:56.806878  804231 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-472903" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:57.003237  804231 request.go:683] "Waited before sending request" delay="196.261492ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-472903"
	I0916 23:57:57.203189  804231 request.go:683] "Waited before sending request" delay="197.16206ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903"
	I0916 23:57:57.205847  804231 pod_ready.go:94] pod "kube-controller-manager-ha-472903" is "Ready"
	I0916 23:57:57.205870  804231 pod_ready.go:86] duration metric: took 398.97003ms for pod "kube-controller-manager-ha-472903" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:57.205878  804231 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-472903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:57.403223  804231 request.go:683] "Waited before sending request" delay="197.233762ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-472903-m02"
	I0916 23:57:57.603503  804231 request.go:683] "Waited before sending request" delay="197.308924ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903-m02"
	I0916 23:57:57.606309  804231 pod_ready.go:94] pod "kube-controller-manager-ha-472903-m02" is "Ready"
	I0916 23:57:57.606331  804231 pod_ready.go:86] duration metric: took 400.447455ms for pod "kube-controller-manager-ha-472903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:57.606339  804231 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-472903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:57.803572  804231 request.go:683] "Waited before sending request" delay="197.156861ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-472903-m03"
	I0916 23:57:58.003564  804231 request.go:683] "Waited before sending request" delay="197.308739ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903-m03"
	I0916 23:57:58.006495  804231 pod_ready.go:94] pod "kube-controller-manager-ha-472903-m03" is "Ready"
	I0916 23:57:58.006527  804231 pod_ready.go:86] duration metric: took 400.177209ms for pod "kube-controller-manager-ha-472903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:58.203971  804231 request.go:683] "Waited before sending request" delay="197.330656ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I0916 23:57:58.207087  804231 pod_ready.go:83] waiting for pod "kube-proxy-58lkb" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:58.403484  804231 request.go:683] "Waited before sending request" delay="196.298118ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-58lkb"
	I0916 23:57:58.603727  804231 request.go:683] "Waited before sending request" delay="197.238459ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903-m02"
	I0916 23:57:58.606561  804231 pod_ready.go:94] pod "kube-proxy-58lkb" is "Ready"
	I0916 23:57:58.606586  804231 pod_ready.go:86] duration metric: took 399.476011ms for pod "kube-proxy-58lkb" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:58.606593  804231 pod_ready.go:83] waiting for pod "kube-proxy-d4m8f" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:58.804003  804231 request.go:683] "Waited before sending request" delay="197.323847ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d4m8f"
	I0916 23:57:59.003937  804231 request.go:683] "Waited before sending request" delay="197.340178ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903"
	I0916 23:57:59.006899  804231 pod_ready.go:94] pod "kube-proxy-d4m8f" is "Ready"
	I0916 23:57:59.006927  804231 pod_ready.go:86] duration metric: took 400.327971ms for pod "kube-proxy-d4m8f" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:59.006938  804231 pod_ready.go:83] waiting for pod "kube-proxy-kn6nb" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:57:59.203366  804231 request.go:683] "Waited before sending request" delay="196.341882ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kn6nb"
	I0916 23:57:59.403608  804231 request.go:683] "Waited before sending request" delay="197.193431ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903-m03"
	I0916 23:57:59.604047  804231 request.go:683] "Waited before sending request" delay="96.244025ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kn6nb"
	I0916 23:57:59.803112  804231 request.go:683] "Waited before sending request" delay="196.282766ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903-m03"
	I0916 23:58:00.203120  804231 request.go:683] "Waited before sending request" delay="192.276334ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903-m03"
	I0916 23:58:00.603459  804231 request.go:683] "Waited before sending request" delay="93.218157ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472903-m03"
	W0916 23:58:01.014543  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:03.512871  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:06.012965  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:08.512763  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:11.012966  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:13.013166  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:15.512655  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:18.012615  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:20.513188  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:23.012908  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:25.013240  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:27.512733  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:30.012142  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:32.012503  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:34.013070  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	W0916 23:58:36.512643  804231 pod_ready.go:104] pod "kube-proxy-kn6nb" is not "Ready", error: <nil>
	I0916 23:58:37.014670  804231 pod_ready.go:94] pod "kube-proxy-kn6nb" is "Ready"
	I0916 23:58:37.014697  804231 pod_ready.go:86] duration metric: took 38.007753603s for pod "kube-proxy-kn6nb" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:37.017732  804231 pod_ready.go:83] waiting for pod "kube-scheduler-ha-472903" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:37.022228  804231 pod_ready.go:94] pod "kube-scheduler-ha-472903" is "Ready"
	I0916 23:58:37.022246  804231 pod_ready.go:86] duration metric: took 4.488553ms for pod "kube-scheduler-ha-472903" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:37.022253  804231 pod_ready.go:83] waiting for pod "kube-scheduler-ha-472903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:37.026173  804231 pod_ready.go:94] pod "kube-scheduler-ha-472903-m02" is "Ready"
	I0916 23:58:37.026191  804231 pod_ready.go:86] duration metric: took 3.932068ms for pod "kube-scheduler-ha-472903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:37.026198  804231 pod_ready.go:83] waiting for pod "kube-scheduler-ha-472903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:37.030029  804231 pod_ready.go:94] pod "kube-scheduler-ha-472903-m03" is "Ready"
	I0916 23:58:37.030046  804231 pod_ready.go:86] duration metric: took 3.843487ms for pod "kube-scheduler-ha-472903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:37.030054  804231 pod_ready.go:40] duration metric: took 42.627839542s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
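The 42.6s of extra waiting above is a poll over every kube-system pod carrying one of the listed component labels until each reports Ready (most of that time is spent on kube-proxy-kn6nb). A simplified Go sketch of that readiness loop with client-go follows; waitForLabelReady and podReady are hypothetical helper names, and the 2-second sleep only approximates the cadence visible in the log, so this is a sketch rather than minikube's pod_ready.go.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitForLabelReady polls pods matching the selector (e.g. "k8s-app=kube-proxy")
// in kube-system until all of them are Ready or the timeout expires.
func waitForLabelReady(cs *kubernetes.Clientset, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		allReady := true
		for i := range pods.Items {
			if !podReady(&pods.Items[i]) {
				allReady = false
				break
			}
		}
		if allReady {
			return nil
		}
		time.Sleep(2 * time.Second) // roughly matches the ~2.5s polling cadence in the log
	}
	return fmt.Errorf("pods with label %q not Ready within %s", selector, timeout)
}

func main() {
	// Building the clientset is shown in the earlier rest.Config sketch; omitted here.
	_ = waitForLabelReady
}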
	I0916 23:58:37.073472  804231 start.go:617] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0916 23:58:37.074923  804231 out.go:179] * Done! kubectl is now configured to use "ha-472903" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0a41d8b587e02       8c811b4aec35f       14 minutes ago      Running             busybox                   0                   a2422ee3e6e6d       busybox-7b57f96db7-6hrm6
	f33de265effb1       6e38f40d628db       15 minutes ago      Running             storage-provisioner       1                   1c0713f862ea0       storage-provisioner
	9f103b05d2d6f       52546a367cc9e       15 minutes ago      Running             coredns                   0                   9579263342827       coredns-66bc5c9577-c94hz
	3b457407f10e3       52546a367cc9e       15 minutes ago      Running             coredns                   0                   290cfb537788e       coredns-66bc5c9577-qn8m7
	cc69d2451cb65       409467f978b4a       15 minutes ago      Running             kindnet-cni               0                   3e17d6ae9b2a6       kindnet-lh7dv
	f4767b6363ce9       6e38f40d628db       15 minutes ago      Exited              storage-provisioner       0                   1c0713f862ea0       storage-provisioner
	92dd4d116eb03       df0860106674d       15 minutes ago      Running             kube-proxy                0                   8c0ecd5301326       kube-proxy-d4m8f
	3cb75495f7a54       765655ea60781       16 minutes ago      Running             kube-vip                  0                   4c425da29992d       kube-vip-ha-472903
	bba28cace6502       46169d968e920       16 minutes ago      Running             kube-scheduler            0                   f18dd7697c60f       kube-scheduler-ha-472903
	087290a41f59c       a0af72f2ec6d6       16 minutes ago      Running             kube-controller-manager   0                   0760ebe1d2a56       kube-controller-manager-ha-472903
	0aba62132d764       90550c43ad2bc       16 minutes ago      Running             kube-apiserver            0                   8ad1fa8bc0267       kube-apiserver-ha-472903
	23c0af0bdbe95       5f1f5298c888d       16 minutes ago      Running             etcd                      0                   b01a62742caec       etcd-ha-472903
	
	
	==> containerd <==
	Sep 16 23:57:20 ha-472903 containerd[765]: time="2025-09-16T23:57:20.857383931Z" level=info msg="StartContainer for \"9f103b05d2d6fd9df1ffca0135173363251e58587aa3f9093200d96a7302d315\""
	Sep 16 23:57:20 ha-472903 containerd[765]: time="2025-09-16T23:57:20.915209442Z" level=info msg="StartContainer for \"9f103b05d2d6fd9df1ffca0135173363251e58587aa3f9093200d96a7302d315\" returns successfully"
	Sep 16 23:57:26 ha-472903 containerd[765]: time="2025-09-16T23:57:26.847849669Z" level=info msg="received exit event container_id:\"f4767b6363ce9c18f8b183cfefd42f69a8b6845fea9e30eec23d90668bc0a3f8\"  id:\"f4767b6363ce9c18f8b183cfefd42f69a8b6845fea9e30eec23d90668bc0a3f8\"  pid:2188  exit_status:1  exited_at:{seconds:1758067046  nanos:847300745}"
	Sep 16 23:57:29 ha-472903 containerd[765]: time="2025-09-16T23:57:29.084468964Z" level=info msg="shim disconnected" id=f4767b6363ce9c18f8b183cfefd42f69a8b6845fea9e30eec23d90668bc0a3f8 namespace=k8s.io
	Sep 16 23:57:29 ha-472903 containerd[765]: time="2025-09-16T23:57:29.084514637Z" level=warning msg="cleaning up after shim disconnected" id=f4767b6363ce9c18f8b183cfefd42f69a8b6845fea9e30eec23d90668bc0a3f8 namespace=k8s.io
	Sep 16 23:57:29 ha-472903 containerd[765]: time="2025-09-16T23:57:29.084528446Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 16 23:57:29 ha-472903 containerd[765]: time="2025-09-16T23:57:29.861023305Z" level=info msg="CreateContainer within sandbox \"1c0713f862ea047ef39e7ae39aea7b7769255565bbf61da2859ac341b5b32bca\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:1,}"
	Sep 16 23:57:29 ha-472903 containerd[765]: time="2025-09-16T23:57:29.875038922Z" level=info msg="CreateContainer within sandbox \"1c0713f862ea047ef39e7ae39aea7b7769255565bbf61da2859ac341b5b32bca\" for &ContainerMetadata{Name:storage-provisioner,Attempt:1,} returns container id \"f33de265effb1050318db82caef7df35706c6a78a2f601466a28e71f4048fedc\""
	Sep 16 23:57:29 ha-472903 containerd[765]: time="2025-09-16T23:57:29.875884762Z" level=info msg="StartContainer for \"f33de265effb1050318db82caef7df35706c6a78a2f601466a28e71f4048fedc\""
	Sep 16 23:57:29 ha-472903 containerd[765]: time="2025-09-16T23:57:29.929708067Z" level=info msg="StartContainer for \"f33de265effb1050318db82caef7df35706c6a78a2f601466a28e71f4048fedc\" returns successfully"
	Sep 16 23:58:40 ha-472903 containerd[765]: time="2025-09-16T23:58:40.362974621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox-7b57f96db7-6hrm6,Uid:bd03bad4-af1e-42d0-81fb-6fcaeaa8775e,Namespace:default,Attempt:0,}"
	Sep 16 23:58:40 ha-472903 containerd[765]: time="2025-09-16T23:58:40.455106923Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to create inotify fd"
	Sep 16 23:58:40 ha-472903 containerd[765]: time="2025-09-16T23:58:40.455480779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox-7b57f96db7-6hrm6,Uid:bd03bad4-af1e-42d0-81fb-6fcaeaa8775e,Namespace:default,Attempt:0,} returns sandbox id \"a2422ee3e6e6de2b23238fbdd05d962f2c25009569227e21869b285e5353e70a\""
	Sep 16 23:58:40 ha-472903 containerd[765]: time="2025-09-16T23:58:40.457290181Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28\""
	Sep 16 23:58:42 ha-472903 containerd[765]: time="2025-09-16T23:58:42.440332779Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Sep 16 23:58:42 ha-472903 containerd[765]: time="2025-09-16T23:58:42.440968214Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28: active requests=0, bytes read=727667"
	Sep 16 23:58:42 ha-472903 containerd[765]: time="2025-09-16T23:58:42.442025332Z" level=info msg="ImageCreate event name:\"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Sep 16 23:58:42 ha-472903 containerd[765]: time="2025-09-16T23:58:42.443719507Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Sep 16 23:58:42 ha-472903 containerd[765]: time="2025-09-16T23:58:42.444221405Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28\" with image id \"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\", repo tag \"gcr.io/k8s-minikube/busybox:1.28\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12\", size \"725911\" in 1.986887608s"
	Sep 16 23:58:42 ha-472903 containerd[765]: time="2025-09-16T23:58:42.444254598Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28\" returns image reference \"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\""
	Sep 16 23:58:42 ha-472903 containerd[765]: time="2025-09-16T23:58:42.447875079Z" level=info msg="CreateContainer within sandbox \"a2422ee3e6e6de2b23238fbdd05d962f2c25009569227e21869b285e5353e70a\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Sep 16 23:58:42 ha-472903 containerd[765]: time="2025-09-16T23:58:42.457018566Z" level=info msg="CreateContainer within sandbox \"a2422ee3e6e6de2b23238fbdd05d962f2c25009569227e21869b285e5353e70a\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"0a41d8b587e021b881cec481dd5e13b98d695cdba1671a8eb501413b121637d8\""
	Sep 16 23:58:42 ha-472903 containerd[765]: time="2025-09-16T23:58:42.457508138Z" level=info msg="StartContainer for \"0a41d8b587e021b881cec481dd5e13b98d695cdba1671a8eb501413b121637d8\""
	Sep 16 23:58:42 ha-472903 containerd[765]: time="2025-09-16T23:58:42.510633374Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to create inotify fd"
	Sep 16 23:58:42 ha-472903 containerd[765]: time="2025-09-16T23:58:42.512731136Z" level=info msg="StartContainer for \"0a41d8b587e021b881cec481dd5e13b98d695cdba1671a8eb501413b121637d8\" returns successfully"
	
	
	==> coredns [3b457407f10e357ce33da7fa3fb4333f8312f0d3e3570cf8528cdcac8f5a1d0f] <==
	[INFO] 10.244.1.2:57899 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.012540337s
	[INFO] 10.244.1.2:54323 - 5 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,rd,ra 124 0.008980197s
	[INFO] 10.244.1.2:53799 - 6 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,rd,ra 126 0.009949044s
	[INFO] 10.244.0.4:39485 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157098s
	[INFO] 10.244.0.4:57871 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 89 0.000750185s
	[INFO] 10.244.0.4:53410 - 5 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,aa,rd,ra 126 0.000089028s
	[INFO] 10.244.1.2:37733 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150317s
	[INFO] 10.244.1.2:59346 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.028128363s
	[INFO] 10.244.1.2:43091 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.01004668s
	[INFO] 10.244.1.2:37227 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000191819s
	[INFO] 10.244.1.2:40079 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000125376s
	[INFO] 10.244.0.4:38168 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000181114s
	[INFO] 10.244.0.4:60067 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000087147s
	[INFO] 10.244.0.4:47611 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000122939s
	[INFO] 10.244.0.4:37626 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000121195s
	[INFO] 10.244.1.2:42817 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159509s
	[INFO] 10.244.1.2:33910 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000186538s
	[INFO] 10.244.1.2:37929 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000109836s
	[INFO] 10.244.0.4:50698 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000212263s
	[INFO] 10.244.0.4:33166 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000100167s
	[INFO] 10.244.1.2:50377 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157558s
	[INFO] 10.244.1.2:39491 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000132025s
	[INFO] 10.244.1.2:50075 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000112028s
	[INFO] 10.244.0.4:58743 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000149175s
	[INFO] 10.244.0.4:52796 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000114946s
	
	
	==> coredns [9f103b05d2d6fd9df1ffca0135173363251e58587aa3f9093200d96a7302d315] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45239 - 14115 "HINFO IN 5883645869461503498.3950535614037284853. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.058516241s
	[INFO] 10.244.1.2:55352 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 89 0.003252862s
	[INFO] 10.244.0.4:33650 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.001640931s
	[INFO] 10.244.0.4:50077 - 6 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,rd,ra 124 0.000621363s
	[INFO] 10.244.1.2:48439 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000189187s
	[INFO] 10.244.1.2:39582 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000151327s
	[INFO] 10.244.1.2:59539 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000140715s
	[INFO] 10.244.0.4:42999 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000177514s
	[INFO] 10.244.0.4:36769 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.010694753s
	[INFO] 10.244.0.4:53074 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000158932s
	[INFO] 10.244.0.4:57223 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00012213s
	[INFO] 10.244.1.2:50810 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000176678s
	[INFO] 10.244.0.4:58045 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142445s
	[INFO] 10.244.0.4:39777 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000123555s
	[INFO] 10.244.1.2:59022 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000148853s
	[INFO] 10.244.0.4:45136 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001657s
	[INFO] 10.244.0.4:37711 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000134332s
	
	
	==> describe nodes <==
	Name:               ha-472903
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-472903
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=ha-472903
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_16T23_56_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Sep 2025 23:56:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-472903
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:12:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:08:42 +0000   Tue, 16 Sep 2025 23:56:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:08:42 +0000   Tue, 16 Sep 2025 23:56:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:08:42 +0000   Tue, 16 Sep 2025 23:56:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:08:42 +0000   Tue, 16 Sep 2025 23:56:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-472903
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 ac22e2ab5b0349cdb9474983aa23278e
	  System UUID:                695af4c7-28fb-4299-9454-75db3262ca2c
	  Boot ID:                    4acfd7d3-9698-436f-b4ae-efdf6bd483d5
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-6hrm6             0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 coredns-66bc5c9577-c94hz             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     16m
	  kube-system                 coredns-66bc5c9577-qn8m7             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     16m
	  kube-system                 etcd-ha-472903                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         16m
	  kube-system                 kindnet-lh7dv                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      16m
	  kube-system                 kube-apiserver-ha-472903             250m (3%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-472903    200m (2%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-d4m8f                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-472903             100m (1%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-472903                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15m                kube-proxy       
	  Normal  NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node ha-472903 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     16m (x7 over 16m)  kubelet          Node ha-472903 status is now: NodeHasSufficientPID
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node ha-472903 status is now: NodeHasSufficientMemory
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m                kubelet          Node ha-472903 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m                kubelet          Node ha-472903 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m                kubelet          Node ha-472903 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           16m                node-controller  Node ha-472903 event: Registered Node ha-472903 in Controller
	  Normal  RegisteredNode           15m                node-controller  Node ha-472903 event: Registered Node ha-472903 in Controller
	  Normal  RegisteredNode           15m                node-controller  Node ha-472903 event: Registered Node ha-472903 in Controller
	  Normal  RegisteredNode           59s                node-controller  Node ha-472903 event: Registered Node ha-472903 in Controller
	
	
	Name:               ha-472903-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-472903-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=ha-472903
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_16T23_57_23_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Sep 2025 23:57:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-472903-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:12:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:11:52 +0000   Tue, 16 Sep 2025 23:57:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:11:52 +0000   Tue, 16 Sep 2025 23:57:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:11:52 +0000   Tue, 16 Sep 2025 23:57:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:11:52 +0000   Tue, 16 Sep 2025 23:57:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-472903-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 ee73b5448da14bee956d19ebaab36496
	  System UUID:                85df9db8-f21a-4038-9f8c-4cc1d81dc0d5
	  Boot ID:                    4acfd7d3-9698-436f-b4ae-efdf6bd483d5
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-4jfjt                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 etcd-ha-472903-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         15m
	  kube-system                 kindnet-q7c7s                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      15m
	  kube-system                 kube-apiserver-ha-472903-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-472903-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-58lkb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-472903-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-472903-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15m                kube-proxy       
	  Normal  RegisteredNode           15m                node-controller  Node ha-472903-m02 event: Registered Node ha-472903-m02 in Controller
	  Normal  RegisteredNode           15m                node-controller  Node ha-472903-m02 event: Registered Node ha-472903-m02 in Controller
	  Normal  RegisteredNode           15m                node-controller  Node ha-472903-m02 event: Registered Node ha-472903-m02 in Controller
	  Normal  Starting                 65s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  65s (x8 over 65s)  kubelet          Node ha-472903-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    65s (x8 over 65s)  kubelet          Node ha-472903-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     65s (x7 over 65s)  kubelet          Node ha-472903-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  65s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           59s                node-controller  Node ha-472903-m02 event: Registered Node ha-472903-m02 in Controller
	
	
	Name:               ha-472903-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-472903-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=ha-472903
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_16T23_57_52_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Sep 2025 23:57:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-472903-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:12:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:09:45 +0000   Tue, 16 Sep 2025 23:57:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:09:45 +0000   Tue, 16 Sep 2025 23:57:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:09:45 +0000   Tue, 16 Sep 2025 23:57:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:09:45 +0000   Tue, 16 Sep 2025 23:57:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-472903-m03
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 9964c713c65f4333be8a877aab744040
	  System UUID:                7eb7f2ee-a32d-4876-a4ad-58f745b9c377
	  Boot ID:                    4acfd7d3-9698-436f-b4ae-efdf6bd483d5
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-mknzs                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 etcd-ha-472903-m03                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         15m
	  kube-system                 kindnet-x6twd                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      15m
	  kube-system                 kube-apiserver-ha-472903-m03             250m (3%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-472903-m03    200m (2%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-kn6nb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-472903-m03             100m (1%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-472903-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  15m   node-controller  Node ha-472903-m03 event: Registered Node ha-472903-m03 in Controller
	  Normal  RegisteredNode  15m   node-controller  Node ha-472903-m03 event: Registered Node ha-472903-m03 in Controller
	  Normal  RegisteredNode  14m   node-controller  Node ha-472903-m03 event: Registered Node ha-472903-m03 in Controller
	  Normal  RegisteredNode  59s   node-controller  Node ha-472903-m03 event: Registered Node ha-472903-m03 in Controller
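	The three node descriptions above can also be read programmatically; below is a minimal client-go sketch (illustrative only, not part of the captured test output) that prints each node's PodCIDR and Ready condition. It assumes a reachable kubeconfig at the default path for the cluster under test.

	// list_nodes.go: print PodCIDR and Ready condition for every node,
	// mirroring two of the fields shown in the `describe` output above.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the kubeconfig the way kubectl would; the default home path is an assumption.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			for _, c := range n.Status.Conditions {
				if c.Type == "Ready" {
					fmt.Printf("%s\tPodCIDR=%s\tReady=%s (%s)\n", n.Name, n.Spec.PodCIDR, c.Status, c.Reason)
				}
			}
		}
	}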
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 86 e8 75 4b 01 57 08 06
	[  +0.025562] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff be 3d a9 85 b1 bd 08 06
	[ +13.150028] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff d6 5c f0 26 cd ba 08 06
	[  +0.000341] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a 20 90 fb f5 d8 08 06
	[ +28.639349] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 06 26 63 8d db 90 08 06
	[  +0.000417] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff be 3d a9 85 b1 bd 08 06
	[  +0.836892] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 56 cc 9b 52 38 94 08 06
	[  +0.080327] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3e 79 8e c8 7e 37 08 06
	[Sep16 23:40] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 66 3b 76 df aa 6a 08 06
	[ +20.325550] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff de 39 4b 41 df 63 08 06
	[  +0.000318] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 66 3b 76 df aa 6a 08 06
	[  +8.925776] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3e cd c1 f7 dc c8 08 06
	[  +0.000373] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 3e 79 8e c8 7e 37 08 06
	
	
	==> etcd [23c0af0bdbe9526d53769461ed9f80d8c743b02e625b65cce39c888f5e7d4b4e] <==
	{"level":"warn","ts":"2025-09-17T00:11:49.018800Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"3aa85cdcd5e5557b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-17T00:11:49.020276Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"3aa85cdcd5e5557b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-17T00:11:49.032013Z","caller":"rafthttp/stream.go:193","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"3aa85cdcd5e5557b"}
	{"level":"warn","ts":"2025-09-17T00:11:49.078730Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"3aa85cdcd5e5557b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-17T00:11:49.178322Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"3aa85cdcd5e5557b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-17T00:11:49.278625Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"3aa85cdcd5e5557b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-17T00:11:49.378568Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"3aa85cdcd5e5557b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-17T00:11:49.478951Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"3aa85cdcd5e5557b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-17T00:11:49.552284Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"3aa85cdcd5e5557b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-17T00:11:49.578390Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"3aa85cdcd5e5557b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-17T00:11:49.608311Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"3aa85cdcd5e5557b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-17T00:11:49.678510Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"3aa85cdcd5e5557b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-17T00:11:49.681366Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"3aa85cdcd5e5557b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-17T00:11:49.711773Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"3aa85cdcd5e5557b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-17T00:11:49.713284Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"3aa85cdcd5e5557b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-17T00:11:49.778145Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"3aa85cdcd5e5557b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-17T00:11:49.878815Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"3aa85cdcd5e5557b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-17T00:11:49.920121Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"3aa85cdcd5e5557b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"info","ts":"2025-09-17T00:11:51.362871Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"3aa85cdcd5e5557b","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-09-17T00:11:51.362931Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"3aa85cdcd5e5557b"}
	{"level":"info","ts":"2025-09-17T00:11:51.362970Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"3aa85cdcd5e5557b"}
	{"level":"info","ts":"2025-09-17T00:11:51.363333Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"3aa85cdcd5e5557b","stream-type":"stream Message"}
	{"level":"info","ts":"2025-09-17T00:11:51.363368Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"3aa85cdcd5e5557b"}
	{"level":"info","ts":"2025-09-17T00:11:51.374357Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"3aa85cdcd5e5557b"}
	{"level":"info","ts":"2025-09-17T00:11:51.375727Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"3aa85cdcd5e5557b"}
	
	
	==> kernel <==
	 00:12:55 up  2:55,  0 users,  load average: 0.59, 0.50, 0.80
	Linux ha-472903 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [cc69d2451cb65860b5bc78e027be2fc1cb0f9fa6542b4abe3bc1ff1c90a8fe60] <==
	I0917 00:12:07.504215       1 main.go:324] Node ha-472903-m02 has CIDR [10.244.1.0/24] 
	I0917 00:12:17.509075       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:12:17.509122       1 main.go:301] handling current node
	I0917 00:12:17.509138       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:12:17.509143       1 main.go:324] Node ha-472903-m02 has CIDR [10.244.1.0/24] 
	I0917 00:12:17.509371       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:12:17.509383       1 main.go:324] Node ha-472903-m03 has CIDR [10.244.2.0/24] 
	I0917 00:12:27.503609       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:12:27.503658       1 main.go:324] Node ha-472903-m03 has CIDR [10.244.2.0/24] 
	I0917 00:12:27.503855       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:12:27.503869       1 main.go:301] handling current node
	I0917 00:12:27.503883       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:12:27.503889       1 main.go:324] Node ha-472903-m02 has CIDR [10.244.1.0/24] 
	I0917 00:12:37.507295       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:12:37.507338       1 main.go:301] handling current node
	I0917 00:12:37.507353       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:12:37.507359       1 main.go:324] Node ha-472903-m02 has CIDR [10.244.1.0/24] 
	I0917 00:12:37.507565       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:12:37.507578       1 main.go:324] Node ha-472903-m03 has CIDR [10.244.2.0/24] 
	I0917 00:12:47.503578       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:12:47.503630       1 main.go:324] Node ha-472903-m03 has CIDR [10.244.2.0/24] 
	I0917 00:12:47.503841       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:12:47.503857       1 main.go:301] handling current node
	I0917 00:12:47.503874       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:12:47.503882       1 main.go:324] Node ha-472903-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [0aba62132d764965d8e1a80a4a6345bb7e34892b23143da4a7af3450cd465d6c] <==
	I0917 00:06:47.441344       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0917 00:07:34.732036       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:07:42.022448       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:08:46.236959       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:08:51.159386       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:09:52.603432       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:09:53.014406       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0917 00:10:41.954540       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37534: use of closed network connection
	E0917 00:10:42.122977       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37556: use of closed network connection
	E0917 00:10:42.250606       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37572: use of closed network connection
	E0917 00:10:42.442469       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37584: use of closed network connection
	E0917 00:10:42.605380       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37602: use of closed network connection
	E0917 00:10:42.730284       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37612: use of closed network connection
	E0917 00:10:42.884291       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37626: use of closed network connection
	E0917 00:10:43.036952       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37644: use of closed network connection
	E0917 00:10:43.161098       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37658: use of closed network connection
	E0917 00:10:45.408563       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37722: use of closed network connection
	E0917 00:10:45.568465       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37752: use of closed network connection
	E0917 00:10:45.727267       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37770: use of closed network connection
	E0917 00:10:45.883182       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37790: use of closed network connection
	E0917 00:10:46.004301       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37814: use of closed network connection
	I0917 00:10:57.282648       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:10:57.462257       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:11:57.709186       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:12:01.641423       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [087290a41f59caa4f9bc89759bcec6cf90f47c8a2ab83b7c671a8fff35641df9] <==
	I0916 23:56:54.728442       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0916 23:56:54.728466       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0916 23:56:54.728485       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0916 23:56:54.728644       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0916 23:56:54.728665       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0916 23:56:54.728648       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0916 23:56:54.728914       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0916 23:56:54.730175       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0916 23:56:54.730201       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0916 23:56:54.732432       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0916 23:56:54.733452       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0916 23:56:54.735655       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0916 23:56:54.735714       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0916 23:56:54.735760       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0916 23:56:54.735767       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0916 23:56:54.735772       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0916 23:56:54.740680       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-472903" podCIDRs=["10.244.0.0/24"]
	I0916 23:56:54.749950       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0916 23:57:22.933124       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-472903-m02\" does not exist"
	I0916 23:57:22.943785       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-472903-m02" podCIDRs=["10.244.1.0/24"]
	I0916 23:57:24.681339       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-472903-m02"
	I0916 23:57:51.749676       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-472903-m03\" does not exist"
	I0916 23:57:51.772476       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-472903-m03" podCIDRs=["10.244.2.0/24"]
	E0916 23:57:51.829801       1 daemon_controller.go:346] "Unhandled Error" err="kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kube-proxy\", GenerateName:\"\", Namespace:\"kube-system\", SelfLink:\"\", UID:\"3f5da9fc-6769-4ca8-a715-edeace44c646\", ResourceVersion:\"594\", Generation:1, CreationTimestamp:time.Date(2025, time.September, 16, 23, 56, 49, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"k8s-app\":\"kube-proxy\"}, Annotations:map[string]string{\"deprecated.daemonset.template.generation\":\"1\"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc00222d0e0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:\"\", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"
\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"k8s-app\":\"kube-proxy\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:\"kube-proxy\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSourc
e)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc0021ed7c0), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"xtables-lock\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc002fcdce0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolu
meSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIV
olumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"lib-modules\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc002fcdcf8), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtu
alDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:\"kube-proxy\", Image:\"registry.k8s.io/kube-proxy:v1.34.0\", Command:[]string{\"/usr/local/bin/kube-proxy\", \"--config=/var/lib/kube-proxy/config.conf\", \"--hostname-override=$(NODE_NAME)\"}, Args:[]string(nil), WorkingDir:\"\", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:\"NODE_NAME\", Value:\"\", ValueFrom:(*v1.EnvVarSource)(0xc00144a7e0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.Re
sourceList(nil), Claims:[]v1.ResourceClaim(nil)}, ResizePolicy:[]v1.ContainerResizePolicy(nil), RestartPolicy:(*v1.ContainerRestartPolicy)(nil), RestartPolicyRules:[]v1.ContainerRestartRule(nil), VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:\"kube-proxy\", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/var/lib/kube-proxy\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"xtables-lock\", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/run/xtables.lock\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"lib-modules\", ReadOnly:true, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/lib/modules\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Life
cycle:(*v1.Lifecycle)(nil), TerminationMessagePath:\"/dev/termination-log\", TerminationMessagePolicy:\"File\", ImagePullPolicy:\"IfNotPresent\", SecurityContext:(*v1.SecurityContext)(0xc0019549c0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:\"Always\", TerminationGracePeriodSeconds:(*int64)(0xc001900b18), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:\"ClusterFirst\", NodeSelector:map[string]string{\"kubernetes.io/os\":\"linux\"}, ServiceAccountName:\"kube-proxy\", DeprecatedServiceAccount:\"kube-proxy\", AutomountServiceAccountToken:(*bool)(nil), NodeName:\"\", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001ba1200), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:\"\", Subdomain:\"\", Affinity:(*v1.Affinity)(nil), SchedulerName:\"default-scheduler\", Tolerations:[]v1.Toleration{v1.Toleration{Key:\"\", Operator:\"Exists\", Value:\"\", Effect:\"\", Tole
rationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:\"system-node-critical\", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil), SchedulingGates:[]v1.PodSchedulingGate(nil), ResourceClaims:[]v1.PodResourceClaim(nil), Resources:(*v1.ResourceRequirements)(nil), HostnameOverride:(*string)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:\"RollingUpdate\", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc001e14570)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001900b70)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:2, NumberMisscheduled:0, DesiredNumberScheduled:2, NumberReady:2, ObservedGeneration:1, UpdatedNumberScheduled:2, NumberAvailab
le:2, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps \"kube-proxy\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0916 23:57:54.685322       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-472903-m03"
	
	
	==> kube-proxy [92dd4d116eb0387dded82fb32d35690ec2d00e3f5e7ac81bf7aea0c6814edd5e] <==
	I0916 23:56:56.831012       1 server_linux.go:53] "Using iptables proxy"
	I0916 23:56:56.891635       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0916 23:56:56.991820       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0916 23:56:56.991862       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0916 23:56:56.991952       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 23:56:57.015955       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 23:56:57.016001       1 server_linux.go:132] "Using iptables Proxier"
	I0916 23:56:57.021120       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 23:56:57.021457       1 server.go:527] "Version info" version="v1.34.0"
	I0916 23:56:57.021499       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 23:56:57.024872       1 config.go:200] "Starting service config controller"
	I0916 23:56:57.024892       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0916 23:56:57.024900       1 config.go:106] "Starting endpoint slice config controller"
	I0916 23:56:57.024909       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0916 23:56:57.024890       1 config.go:403] "Starting serviceCIDR config controller"
	I0916 23:56:57.024917       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0916 23:56:57.024937       1 config.go:309] "Starting node config controller"
	I0916 23:56:57.024942       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0916 23:56:57.125608       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0916 23:56:57.125691       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0916 23:56:57.125856       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0916 23:56:57.125902       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [bba28cace6502de93aa43db4fb51671581c5074990dea721d98d36d839734a67] <==
	E0916 23:56:48.619869       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0916 23:56:48.649766       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0916 23:56:48.673092       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	I0916 23:56:49.170967       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0916 23:57:51.780040       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-x6twd\": pod kindnet-x6twd is already assigned to node \"ha-472903-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-x6twd" node="ha-472903-m03"
	E0916 23:57:51.780142       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod f2346479-1adb-4bc7-af07-971525be2b05(kube-system/kindnet-x6twd) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-x6twd"
	E0916 23:57:51.780183       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-x6twd\": pod kindnet-x6twd is already assigned to node \"ha-472903-m03\"" logger="UnhandledError" pod="kube-system/kindnet-x6twd"
	I0916 23:57:51.782132       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-x6twd" node="ha-472903-m03"
	E0916 23:58:37.948695       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-x6xc9\": pod busybox-7b57f96db7-x6xc9 is already assigned to node \"ha-472903-m02\"" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-x6xc9" node="ha-472903-m02"
	E0916 23:58:37.948846       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 565a634f-ab41-4776-ba5d-63a601bfec48(default/busybox-7b57f96db7-x6xc9) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="default/busybox-7b57f96db7-x6xc9"
	E0916 23:58:37.948875       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-x6xc9\": pod busybox-7b57f96db7-x6xc9 is already assigned to node \"ha-472903-m02\"" logger="UnhandledError" pod="default/busybox-7b57f96db7-x6xc9"
	I0916 23:58:37.950251       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7b57f96db7-x6xc9" node="ha-472903-m02"
	I0916 23:58:37.966099       1 cache.go:512] "Pod was added to a different node than it was assumed" podKey="47b06c15-c007-4c50-a248-5411a0f4b6a7" pod="default/busybox-7b57f96db7-4jfjt" assumedNode="ha-472903-m02" currentNode="ha-472903"
	E0916 23:58:37.968241       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-4jfjt\": pod busybox-7b57f96db7-4jfjt is already assigned to node \"ha-472903-m02\"" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-4jfjt" node="ha-472903"
	E0916 23:58:37.968351       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 47b06c15-c007-4c50-a248-5411a0f4b6a7(default/busybox-7b57f96db7-4jfjt) was assumed on ha-472903 but assigned to ha-472903-m02" logger="UnhandledError" pod="default/busybox-7b57f96db7-4jfjt"
	E0916 23:58:37.968376       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-4jfjt\": pod busybox-7b57f96db7-4jfjt is already assigned to node \"ha-472903-m02\"" logger="UnhandledError" pod="default/busybox-7b57f96db7-4jfjt"
	I0916 23:58:37.969472       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7b57f96db7-4jfjt" node="ha-472903-m02"
	E0916 23:58:38.002469       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-wp95z\": pod busybox-7b57f96db7-wp95z is being deleted, cannot be assigned to a host" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-wp95z" node="ha-472903"
	E0916 23:58:38.002779       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-wp95z\": pod busybox-7b57f96db7-wp95z is being deleted, cannot be assigned to a host" logger="UnhandledError" pod="default/busybox-7b57f96db7-wp95z"
	E0916 23:58:38.046394       1 pod_status_patch.go:111] "Failed to patch pod status" err="pods \"busybox-7b57f96db7-xnrsc\" not found" pod="default/busybox-7b57f96db7-xnrsc"
	E0916 23:58:38.046880       1 pod_status_patch.go:111] "Failed to patch pod status" err="pods \"busybox-7b57f96db7-wp95z\" not found" pod="default/busybox-7b57f96db7-wp95z"
	E0916 23:58:40.050124       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-6hrm6\": pod busybox-7b57f96db7-6hrm6 is already assigned to node \"ha-472903\"" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-6hrm6" node="ha-472903"
	E0916 23:58:40.050213       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod bd03bad4-af1e-42d0-81fb-6fcaeaa8775e(default/busybox-7b57f96db7-6hrm6) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="default/busybox-7b57f96db7-6hrm6"
	E0916 23:58:40.050248       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-6hrm6\": pod busybox-7b57f96db7-6hrm6 is already assigned to node \"ha-472903\"" logger="UnhandledError" pod="default/busybox-7b57f96db7-6hrm6"
	I0916 23:58:40.051853       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7b57f96db7-6hrm6" node="ha-472903"
	
	
	==> kubelet <==
	Sep 16 23:58:38 ha-472903 kubelet[1676]: E0916 23:58:38.235025    1676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cc7a8d10-408f-4655-ac70-54b4af22d9eb-kube-api-access-hrb62 podName:cc7a8d10-408f-4655-ac70-54b4af22d9eb nodeName:}" failed. No retries permitted until 2025-09-16 23:58:38.735007966 +0000 UTC m=+109.066439678 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-hrb62" (UniqueName: "kubernetes.io/projected/cc7a8d10-408f-4655-ac70-54b4af22d9eb-kube-api-access-hrb62") pod "busybox-7b57f96db7-5pwbb" (UID: "cc7a8d10-408f-4655-ac70-54b4af22d9eb") : failed to fetch token: pod "busybox-7b57f96db7-5pwbb" not found
	Sep 16 23:58:38 ha-472903 kubelet[1676]: E0916 23:58:38.737179    1676 projected.go:196] Error preparing data for projected volume kube-api-access-xrpwc for pod default/busybox-7b57f96db7-xj7ks: failed to fetch token: pod "busybox-7b57f96db7-xj7ks" not found
	Sep 16 23:58:38 ha-472903 kubelet[1676]: E0916 23:58:38.737266    1676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cac915f6-7630-4320-b6d2-fd18f3c19a17-kube-api-access-xrpwc podName:cac915f6-7630-4320-b6d2-fd18f3c19a17 nodeName:}" failed. No retries permitted until 2025-09-16 23:58:39.737245356 +0000 UTC m=+110.068677057 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-xrpwc" (UniqueName: "kubernetes.io/projected/cac915f6-7630-4320-b6d2-fd18f3c19a17-kube-api-access-xrpwc") pod "busybox-7b57f96db7-xj7ks" (UID: "cac915f6-7630-4320-b6d2-fd18f3c19a17") : failed to fetch token: pod "busybox-7b57f96db7-xj7ks" not found
	Sep 16 23:58:38 ha-472903 kubelet[1676]: E0916 23:58:38.737179    1676 projected.go:196] Error preparing data for projected volume kube-api-access-hrb62 for pod default/busybox-7b57f96db7-5pwbb: failed to fetch token: pod "busybox-7b57f96db7-5pwbb" not found
	Sep 16 23:58:38 ha-472903 kubelet[1676]: E0916 23:58:38.737371    1676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cc7a8d10-408f-4655-ac70-54b4af22d9eb-kube-api-access-hrb62 podName:cc7a8d10-408f-4655-ac70-54b4af22d9eb nodeName:}" failed. No retries permitted until 2025-09-16 23:58:39.737351933 +0000 UTC m=+110.068783647 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-hrb62" (UniqueName: "kubernetes.io/projected/cc7a8d10-408f-4655-ac70-54b4af22d9eb-kube-api-access-hrb62") pod "busybox-7b57f96db7-5pwbb" (UID: "cc7a8d10-408f-4655-ac70-54b4af22d9eb") : failed to fetch token: pod "busybox-7b57f96db7-5pwbb" not found
	Sep 16 23:58:39 ha-472903 kubelet[1676]: E0916 23:58:39.027158    1676 status_manager.go:1018] "Failed to get status for pod" err="pods \"busybox-7b57f96db7-5pwbb\" is forbidden: User \"system:node:ha-472903\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'ha-472903' and this object" podUID="cc7a8d10-408f-4655-ac70-54b4af22d9eb" pod="default/busybox-7b57f96db7-5pwbb"
	Sep 16 23:58:39 ha-472903 kubelet[1676]: E0916 23:58:39.028111    1676 status_manager.go:1018] "Failed to get status for pod" err="pods \"busybox-7b57f96db7-xj7ks\" is forbidden: User \"system:node:ha-472903\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'ha-472903' and this object" podUID="cac915f6-7630-4320-b6d2-fd18f3c19a17" pod="default/busybox-7b57f96db7-xj7ks"
	Sep 16 23:58:39 ha-472903 kubelet[1676]: E0916 23:58:39.039445    1676 status_manager.go:1018] "Failed to get status for pod" err="pods \"busybox-7b57f96db7-xj7ks\" is forbidden: User \"system:node:ha-472903\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'ha-472903' and this object" podUID="cac915f6-7630-4320-b6d2-fd18f3c19a17" pod="default/busybox-7b57f96db7-xj7ks"
	Sep 16 23:58:39 ha-472903 kubelet[1676]: E0916 23:58:39.042381    1676 status_manager.go:1018] "Failed to get status for pod" err="pods \"busybox-7b57f96db7-5pwbb\" is forbidden: User \"system:node:ha-472903\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'ha-472903' and this object" podUID="cc7a8d10-408f-4655-ac70-54b4af22d9eb" pod="default/busybox-7b57f96db7-5pwbb"
	Sep 16 23:58:39 ha-472903 kubelet[1676]: I0916 23:58:39.138755    1676 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9njqf\" (UniqueName: \"kubernetes.io/projected/59b9a23c-498d-4802-9790-70931c4a2c06-kube-api-access-9njqf\") pod \"59b9a23c-498d-4802-9790-70931c4a2c06\" (UID: \"59b9a23c-498d-4802-9790-70931c4a2c06\") "
	Sep 16 23:58:39 ha-472903 kubelet[1676]: I0916 23:58:39.138821    1676 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hrb62\" (UniqueName: \"kubernetes.io/projected/cc7a8d10-408f-4655-ac70-54b4af22d9eb-kube-api-access-hrb62\") on node \"ha-472903\" DevicePath \"\""
	Sep 16 23:58:39 ha-472903 kubelet[1676]: I0916 23:58:39.138836    1676 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xrpwc\" (UniqueName: \"kubernetes.io/projected/cac915f6-7630-4320-b6d2-fd18f3c19a17-kube-api-access-xrpwc\") on node \"ha-472903\" DevicePath \"\""
	Sep 16 23:58:39 ha-472903 kubelet[1676]: I0916 23:58:39.140952    1676 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59b9a23c-498d-4802-9790-70931c4a2c06-kube-api-access-9njqf" (OuterVolumeSpecName: "kube-api-access-9njqf") pod "59b9a23c-498d-4802-9790-70931c4a2c06" (UID: "59b9a23c-498d-4802-9790-70931c4a2c06"). InnerVolumeSpecName "kube-api-access-9njqf". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Sep 16 23:58:39 ha-472903 kubelet[1676]: I0916 23:58:39.239025    1676 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9njqf\" (UniqueName: \"kubernetes.io/projected/59b9a23c-498d-4802-9790-70931c4a2c06-kube-api-access-9njqf\") on node \"ha-472903\" DevicePath \"\""
	Sep 16 23:58:39 ha-472903 kubelet[1676]: E0916 23:58:39.752137    1676 status_manager.go:1018] "Failed to get status for pod" err="pods \"busybox-7b57f96db7-5pwbb\" is forbidden: User \"system:node:ha-472903\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'ha-472903' and this object" podUID="cc7a8d10-408f-4655-ac70-54b4af22d9eb" pod="default/busybox-7b57f96db7-5pwbb"
	Sep 16 23:58:39 ha-472903 kubelet[1676]: E0916 23:58:39.753199    1676 status_manager.go:1018] "Failed to get status for pod" err="pods \"busybox-7b57f96db7-xj7ks\" is forbidden: User \"system:node:ha-472903\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'ha-472903' and this object" podUID="cac915f6-7630-4320-b6d2-fd18f3c19a17" pod="default/busybox-7b57f96db7-xj7ks"
	Sep 16 23:58:39 ha-472903 kubelet[1676]: I0916 23:58:39.754268    1676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cac915f6-7630-4320-b6d2-fd18f3c19a17" path="/var/lib/kubelet/pods/cac915f6-7630-4320-b6d2-fd18f3c19a17/volumes"
	Sep 16 23:58:39 ha-472903 kubelet[1676]: I0916 23:58:39.754475    1676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc7a8d10-408f-4655-ac70-54b4af22d9eb" path="/var/lib/kubelet/pods/cc7a8d10-408f-4655-ac70-54b4af22d9eb/volumes"
	Sep 16 23:58:40 ha-472903 kubelet[1676]: E0916 23:58:40.056772    1676 status_manager.go:1018] "Failed to get status for pod" err="pods \"busybox-7b57f96db7-5pwbb\" is forbidden: User \"system:node:ha-472903\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'ha-472903' and this object" podUID="cc7a8d10-408f-4655-ac70-54b4af22d9eb" pod="default/busybox-7b57f96db7-5pwbb"
	Sep 16 23:58:40 ha-472903 kubelet[1676]: E0916 23:58:40.057611    1676 status_manager.go:1018] "Failed to get status for pod" err="pods \"busybox-7b57f96db7-xj7ks\" is forbidden: User \"system:node:ha-472903\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'ha-472903' and this object" podUID="cac915f6-7630-4320-b6d2-fd18f3c19a17" pod="default/busybox-7b57f96db7-xj7ks"
	Sep 16 23:58:40 ha-472903 kubelet[1676]: E0916 23:58:40.059208    1676 status_manager.go:1018] "Failed to get status for pod" err="pods \"busybox-7b57f96db7-5pwbb\" is forbidden: User \"system:node:ha-472903\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'ha-472903' and this object" podUID="cc7a8d10-408f-4655-ac70-54b4af22d9eb" pod="default/busybox-7b57f96db7-5pwbb"
	Sep 16 23:58:40 ha-472903 kubelet[1676]: E0916 23:58:40.060512    1676 status_manager.go:1018] "Failed to get status for pod" err="pods \"busybox-7b57f96db7-xj7ks\" is forbidden: User \"system:node:ha-472903\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'ha-472903' and this object" podUID="cac915f6-7630-4320-b6d2-fd18f3c19a17" pod="default/busybox-7b57f96db7-xj7ks"
	Sep 16 23:58:40 ha-472903 kubelet[1676]: I0916 23:58:40.145054    1676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjkrp\" (UniqueName: \"kubernetes.io/projected/bd03bad4-af1e-42d0-81fb-6fcaeaa8775e-kube-api-access-pjkrp\") pod \"busybox-7b57f96db7-6hrm6\" (UID: \"bd03bad4-af1e-42d0-81fb-6fcaeaa8775e\") " pod="default/busybox-7b57f96db7-6hrm6"
	Sep 16 23:58:41 ha-472903 kubelet[1676]: I0916 23:58:41.754549    1676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59b9a23c-498d-4802-9790-70931c4a2c06" path="/var/lib/kubelet/pods/59b9a23c-498d-4802-9790-70931c4a2c06/volumes"
	Sep 16 23:58:43 ha-472903 kubelet[1676]: I0916 23:58:43.049200    1676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-7b57f96db7-6hrm6" podStartSLOduration=3.061025393 podStartE2EDuration="5.049179166s" podCreationTimestamp="2025-09-16 23:58:38 +0000 UTC" firstStartedPulling="2025-09-16 23:58:40.45690156 +0000 UTC m=+110.788333264" lastFinishedPulling="2025-09-16 23:58:42.445055322 +0000 UTC m=+112.776487037" observedRunningTime="2025-09-16 23:58:43.049092106 +0000 UTC m=+113.380523828" watchObservedRunningTime="2025-09-16 23:58:43.049179166 +0000 UTC m=+113.380610888"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-472903 -n ha-472903
helpers_test.go:269: (dbg) Run:  kubectl --context ha-472903 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-7b57f96db7-mknzs
helpers_test.go:282: ======> post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context ha-472903 describe pod busybox-7b57f96db7-mknzs
helpers_test.go:290: (dbg) kubectl --context ha-472903 describe pod busybox-7b57f96db7-mknzs:

                                                
                                                
-- stdout --
	Name:             busybox-7b57f96db7-mknzs
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             ha-472903-m03/192.168.49.4
	Start Time:       Tue, 16 Sep 2025 23:58:37 +0000
	Labels:           app=busybox
	                  pod-template-hash=7b57f96db7
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7b57f96db7
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gmz92 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-gmz92:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason                  Age                   From               Message
	  ----     ------                  ----                  ----               -------
	  Warning  FailedScheduling        14m                   default-scheduler  running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "busybox-7b57f96db7-mknzs": pod busybox-7b57f96db7-mknzs is already assigned to node "ha-472903-m03"
	  Warning  FailedScheduling        14m                   default-scheduler  running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "busybox-7b57f96db7-mknzs": pod busybox-7b57f96db7-mknzs is already assigned to node "ha-472903-m03"
	  Normal   Scheduled               14m                   default-scheduler  Successfully assigned default/busybox-7b57f96db7-mknzs to ha-472903-m03
	  Warning  FailedCreatePodSandBox  14m                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "72439adc47052c2da00cee62587d780275cf6c2423dee9831567464d4725ee9d": failed to find network info for sandbox "72439adc47052c2da00cee62587d780275cf6c2423dee9831567464d4725ee9d"
	  Warning  FailedCreatePodSandBox  14m                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "24ab8b6bd2f38653d2326c375fc81ebf17317e36885547c7b42c011bb95889ed": failed to find network info for sandbox "24ab8b6bd2f38653d2326c375fc81ebf17317e36885547c7b42c011bb95889ed"
	  Warning  FailedCreatePodSandBox  13m                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "300fece4c100bc3e68a19e1fa6f46c8a378753727caaaeb1533dab71f234be58": failed to find network info for sandbox "300fece4c100bc3e68a19e1fa6f46c8a378753727caaaeb1533dab71f234be58"
	  Warning  FailedCreatePodSandBox  13m                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "e49a14b4de5e24fa450a43c124b2916ad7028d35cbc3b0f74595e68ee161d1d0": failed to find network info for sandbox "e49a14b4de5e24fa450a43c124b2916ad7028d35cbc3b0f74595e68ee161d1d0"
	  Warning  FailedCreatePodSandBox  13m                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "efa290ca498f7c70ae29d8d97709edda97bc6b062aac05a3ef6d6a83fbd42797": failed to find network info for sandbox "efa290ca498f7c70ae29d8d97709edda97bc6b062aac05a3ef6d6a83fbd42797"
	  Warning  FailedCreatePodSandBox  13m                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "d5851ce1270b1c8994400ecd7bdabadaf895488957ffb5173dcd7e289db1de6c": failed to find network info for sandbox "d5851ce1270b1c8994400ecd7bdabadaf895488957ffb5173dcd7e289db1de6c"
	  Warning  FailedCreatePodSandBox  13m                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "11aaa894ae434b08da8122c8f3445d03b4c1e54dfb071596f63a0e4654f49f10": failed to find network info for sandbox "11aaa894ae434b08da8122c8f3445d03b4c1e54dfb071596f63a0e4654f49f10"
	  Warning  FailedCreatePodSandBox  12m                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "c8126e80126ff891a4935c60cfec55753f6bb51d789c0eb46098b72267c7d53c": failed to find network info for sandbox "c8126e80126ff891a4935c60cfec55753f6bb51d789c0eb46098b72267c7d53c"
	  Warning  FailedCreatePodSandBox  12m                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "1389a2f92f350a6f495c76f80031300b6442a6a0cc67abd4b045ff9150b3fc3a": failed to find network info for sandbox "1389a2f92f350a6f495c76f80031300b6442a6a0cc67abd4b045ff9150b3fc3a"
	  Warning  FailedCreatePodSandBox  4m14s (x38 over 12m)  kubelet            (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "c3a9afe91461f3ea405980387ac5fab85785c7cf3f180d2b0f894e1df94ca62d": failed to find network info for sandbox "c3a9afe91461f3ea405980387ac5fab85785c7cf3f180d2b0f894e1df94ca62d"

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (67.28s)
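Note: the recurring "failed to find network info for sandbox" events in the post-mortem above typically indicate missing or unloaded CNI configuration on the affected node rather than a problem with the busybox workload itself. A minimal diagnostic sketch, assuming the profile and node names shown in this report (ha-472903, ha-472903-m03) and minikube's default kindnet CNI; the exact pod labels and output formats are assumptions and may differ:

	# Check whether a CNI config is present on the node that keeps failing sandbox creation
	minikube -p ha-472903 ssh -n ha-472903-m03 -- "ls -l /etc/cni/net.d && sudo crictl info | grep -i -A3 network"
	# Confirm the CNI daemonset pod scheduled on that node is Running (filter by node name)
	kubectl --context ha-472903 -n kube-system get pods -o wide | grep ha-472903-m03

If /etc/cni/net.d is empty or the node's CNI pod is not Running, kubelet cannot set up pod sandboxes, and the ContainerCreating state reported above is the expected symptom.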

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (421.96s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-472903 stop --alsologtostderr -v 5: (25.767461736s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 start --wait true --alsologtostderr -v 5
E0917 00:15:37.159350  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/addons-346612/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:15:49.958597  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/functional-695580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:17:13.028302  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/functional-695580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-472903 start --wait true --alsologtostderr -v 5: exit status 80 (6m33.682021341s)

                                                
                                                
-- stdout --
	* [ha-472903] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21550
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21550-749120/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-749120/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "ha-472903" primary control-plane node in "ha-472903" cluster
	* Pulling base image v0.0.48 ...
	* Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	* Enabled addons: 
	
	* Starting "ha-472903-m02" control-plane node in "ha-472903" cluster
	* Pulling base image v0.0.48 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2
	* Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	  - env NO_PROXY=192.168.49.2
	* Verifying Kubernetes components...
	
	* Starting "ha-472903-m03" control-plane node in "ha-472903" cluster
	* Pulling base image v0.0.48 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2,192.168.49.3
	* Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	  - env NO_PROXY=192.168.49.2
	  - env NO_PROXY=192.168.49.2,192.168.49.3
	* Verifying Kubernetes components...
	
	* Starting "ha-472903-m04" worker node in "ha-472903" cluster
	* Pulling base image v0.0.48 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 00:13:22.953197  838391 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:13:22.953530  838391 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:13:22.953542  838391 out.go:374] Setting ErrFile to fd 2...
	I0917 00:13:22.953549  838391 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:13:22.953766  838391 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-749120/.minikube/bin
	I0917 00:13:22.954306  838391 out.go:368] Setting JSON to false
	I0917 00:13:22.955398  838391 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":10545,"bootTime":1758057458,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 00:13:22.955520  838391 start.go:140] virtualization: kvm guest
	I0917 00:13:22.957510  838391 out.go:179] * [ha-472903] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0917 00:13:22.958615  838391 out.go:179]   - MINIKUBE_LOCATION=21550
	I0917 00:13:22.958642  838391 notify.go:220] Checking for updates...
	I0917 00:13:22.960507  838391 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 00:13:22.961674  838391 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-749120/kubeconfig
	I0917 00:13:22.962866  838391 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-749120/.minikube
	I0917 00:13:22.964443  838391 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 00:13:22.965391  838391 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 00:13:22.966891  838391 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:13:22.966986  838391 driver.go:421] Setting default libvirt URI to qemu:///system
	I0917 00:13:22.992446  838391 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0917 00:13:22.992525  838391 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:13:23.045449  838391 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-17 00:13:23.034509691 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:13:23.045556  838391 docker.go:318] overlay module found
	I0917 00:13:23.047016  838391 out.go:179] * Using the docker driver based on existing profile
	I0917 00:13:23.047922  838391 start.go:304] selected driver: docker
	I0917 00:13:23.047937  838391 start.go:918] validating driver "docker" against &{Name:ha-472903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNam
es:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress
:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:13:23.048084  838391 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 00:13:23.048209  838391 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:13:23.101147  838391 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-17 00:13:23.091009521 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:13:23.102012  838391 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:13:23.102057  838391 cni.go:84] Creating CNI manager for ""
	I0917 00:13:23.102129  838391 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0917 00:13:23.102195  838391 start.go:348] cluster config:
	{Name:ha-472903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISoc
ket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-prov
isioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Custom
QemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:13:23.103903  838391 out.go:179] * Starting "ha-472903" primary control-plane node in "ha-472903" cluster
	I0917 00:13:23.104759  838391 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0917 00:13:23.105814  838391 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:13:23.106795  838391 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0917 00:13:23.106833  838391 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4
	I0917 00:13:23.106844  838391 cache.go:58] Caching tarball of preloaded images
	I0917 00:13:23.106881  838391 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:13:23.106921  838391 preload.go:172] Found /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 00:13:23.106932  838391 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0917 00:13:23.107045  838391 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0917 00:13:23.127051  838391 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:13:23.127078  838391 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:13:23.127093  838391 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:13:23.127117  838391 start.go:360] acquireMachinesLock for ha-472903: {Name:mk994658ce3314f2aed1dec341debc49d36a4326 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:13:23.127173  838391 start.go:364] duration metric: took 38.444µs to acquireMachinesLock for "ha-472903"
	I0917 00:13:23.127192  838391 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:13:23.127199  838391 fix.go:54] fixHost starting: 
	I0917 00:13:23.127403  838391 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0917 00:13:23.144605  838391 fix.go:112] recreateIfNeeded on ha-472903: state=Stopped err=<nil>
	W0917 00:13:23.144651  838391 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:13:23.146403  838391 out.go:252] * Restarting existing docker container for "ha-472903" ...
	I0917 00:13:23.146471  838391 cli_runner.go:164] Run: docker start ha-472903
	I0917 00:13:23.362855  838391 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0917 00:13:23.380820  838391 kic.go:430] container "ha-472903" state is running.
	I0917 00:13:23.381209  838391 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903
	I0917 00:13:23.398851  838391 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0917 00:13:23.399057  838391 machine.go:93] provisionDockerMachine start ...
	I0917 00:13:23.399113  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0917 00:13:23.416213  838391 main.go:141] libmachine: Using SSH client type: native
	I0917 00:13:23.416490  838391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33574 <nil> <nil>}
	I0917 00:13:23.416505  838391 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:13:23.417056  838391 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37384->127.0.0.1:33574: read: connection reset by peer
	I0917 00:13:26.554176  838391 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-472903
	
	I0917 00:13:26.554202  838391 ubuntu.go:182] provisioning hostname "ha-472903"
	I0917 00:13:26.554275  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0917 00:13:26.572576  838391 main.go:141] libmachine: Using SSH client type: native
	I0917 00:13:26.572800  838391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33574 <nil> <nil>}
	I0917 00:13:26.572813  838391 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-472903 && echo "ha-472903" | sudo tee /etc/hostname
	I0917 00:13:26.719562  838391 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-472903
	
	I0917 00:13:26.719659  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0917 00:13:26.737757  838391 main.go:141] libmachine: Using SSH client type: native
	I0917 00:13:26.738008  838391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33574 <nil> <nil>}
	I0917 00:13:26.738032  838391 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-472903' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-472903/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-472903' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 00:13:26.872954  838391 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:13:26.872993  838391 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-749120/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-749120/.minikube}
	I0917 00:13:26.873020  838391 ubuntu.go:190] setting up certificates
	I0917 00:13:26.873033  838391 provision.go:84] configureAuth start
	I0917 00:13:26.873086  838391 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903
	I0917 00:13:26.891066  838391 provision.go:143] copyHostCerts
	I0917 00:13:26.891111  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem
	I0917 00:13:26.891147  838391 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem, removing ...
	I0917 00:13:26.891169  838391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem
	I0917 00:13:26.891262  838391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem (1078 bytes)
	I0917 00:13:26.891384  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem
	I0917 00:13:26.891432  838391 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem, removing ...
	I0917 00:13:26.891443  838391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem
	I0917 00:13:26.891485  838391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem (1123 bytes)
	I0917 00:13:26.891575  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem
	I0917 00:13:26.891600  838391 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem, removing ...
	I0917 00:13:26.891610  838391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem
	I0917 00:13:26.891648  838391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem (1675 bytes)
	I0917 00:13:26.891725  838391 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem org=jenkins.ha-472903 san=[127.0.0.1 192.168.49.2 ha-472903 localhost minikube]
	I0917 00:13:27.127844  838391 provision.go:177] copyRemoteCerts
	I0917 00:13:27.127908  838391 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:13:27.127972  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0917 00:13:27.146507  838391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33574 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0917 00:13:27.243455  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 00:13:27.243525  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 00:13:27.269313  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 00:13:27.269382  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0917 00:13:27.294966  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 00:13:27.295048  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 00:13:27.320815  838391 provision.go:87] duration metric: took 447.761849ms to configureAuth
	I0917 00:13:27.320860  838391 ubuntu.go:206] setting minikube options for container-runtime
	I0917 00:13:27.321072  838391 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:13:27.321085  838391 machine.go:96] duration metric: took 3.922015218s to provisionDockerMachine
	I0917 00:13:27.321092  838391 start.go:293] postStartSetup for "ha-472903" (driver="docker")
	I0917 00:13:27.321102  838391 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 00:13:27.321150  838391 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 00:13:27.321188  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0917 00:13:27.339742  838391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33574 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0917 00:13:27.437715  838391 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 00:13:27.441468  838391 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 00:13:27.441498  838391 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 00:13:27.441506  838391 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 00:13:27.441513  838391 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 00:13:27.441524  838391 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-749120/.minikube/addons for local assets ...
	I0917 00:13:27.441576  838391 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-749120/.minikube/files for local assets ...
	I0917 00:13:27.441647  838391 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> 7527072.pem in /etc/ssl/certs
	I0917 00:13:27.441657  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> /etc/ssl/certs/7527072.pem
	I0917 00:13:27.441747  838391 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 00:13:27.451010  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem --> /etc/ssl/certs/7527072.pem (1708 bytes)
	I0917 00:13:27.477190  838391 start.go:296] duration metric: took 156.078591ms for postStartSetup
	I0917 00:13:27.477273  838391 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:13:27.477311  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0917 00:13:27.495838  838391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33574 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0917 00:13:27.588631  838391 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:13:27.593367  838391 fix.go:56] duration metric: took 4.46615876s for fixHost
	I0917 00:13:27.593398  838391 start.go:83] releasing machines lock for "ha-472903", held for 4.466212718s
	I0917 00:13:27.593488  838391 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903
	I0917 00:13:27.611894  838391 ssh_runner.go:195] Run: cat /version.json
	I0917 00:13:27.611963  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0917 00:13:27.611984  838391 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 00:13:27.612068  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0917 00:13:27.630790  838391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33574 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0917 00:13:27.632015  838391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33574 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0917 00:13:27.723564  838391 ssh_runner.go:195] Run: systemctl --version
	I0917 00:13:27.805571  838391 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 00:13:27.810704  838391 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0917 00:13:27.829982  838391 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0917 00:13:27.830056  838391 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:13:27.839307  838391 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0917 00:13:27.839334  838391 start.go:495] detecting cgroup driver to use...
	I0917 00:13:27.839374  838391 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:13:27.839455  838391 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 00:13:27.853620  838391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 00:13:27.866086  838391 docker.go:218] disabling cri-docker service (if available) ...
	I0917 00:13:27.866143  838391 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 00:13:27.879568  838391 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 00:13:27.891699  838391 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 00:13:27.957039  838391 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 00:13:28.019649  838391 docker.go:234] disabling docker service ...
	I0917 00:13:28.019719  838391 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 00:13:28.032725  838391 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 00:13:28.045044  838391 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 00:13:28.110090  838391 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 00:13:28.176290  838391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 00:13:28.188485  838391 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:13:28.206191  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0917 00:13:28.216912  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 00:13:28.227586  838391 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0917 00:13:28.227653  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0917 00:13:28.238198  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:13:28.248607  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 00:13:28.258883  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:13:28.269300  838391 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 00:13:28.279692  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 00:13:28.290638  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 00:13:28.301524  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 00:13:28.312695  838391 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 00:13:28.321821  838391 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 00:13:28.331494  838391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:13:28.395408  838391 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0917 00:13:28.510345  838391 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0917 00:13:28.510442  838391 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0917 00:13:28.514486  838391 start.go:563] Will wait 60s for crictl version
	I0917 00:13:28.514543  838391 ssh_runner.go:195] Run: which crictl
	I0917 00:13:28.518058  838391 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:13:28.553392  838391 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0917 00:13:28.553470  838391 ssh_runner.go:195] Run: containerd --version
	I0917 00:13:28.578186  838391 ssh_runner.go:195] Run: containerd --version
	I0917 00:13:28.607037  838391 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0917 00:13:28.608343  838391 cli_runner.go:164] Run: docker network inspect ha-472903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:13:28.625981  838391 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0917 00:13:28.630074  838391 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:13:28.642270  838391 kubeadm.go:875] updating cluster {Name:ha-472903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIP
s:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false
ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiza
tions:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 00:13:28.642447  838391 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0917 00:13:28.642500  838391 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 00:13:28.677502  838391 containerd.go:627] all images are preloaded for containerd runtime.
	I0917 00:13:28.677528  838391 containerd.go:534] Images already preloaded, skipping extraction
	I0917 00:13:28.677596  838391 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 00:13:28.711767  838391 containerd.go:627] all images are preloaded for containerd runtime.
	I0917 00:13:28.711790  838391 cache_images.go:85] Images are preloaded, skipping loading
	I0917 00:13:28.711799  838391 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 containerd true true} ...
	I0917 00:13:28.711898  838391 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-472903 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 00:13:28.711952  838391 ssh_runner.go:195] Run: sudo crictl info
	I0917 00:13:28.748238  838391 cni.go:84] Creating CNI manager for ""
	I0917 00:13:28.748269  838391 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0917 00:13:28.748282  838391 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 00:13:28.748301  838391 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-472903 NodeName:ha-472903 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 00:13:28.748434  838391 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "ha-472903"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
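	Note: the kubeadm/kubelet/kube-proxy config rendered above is written to the node a few steps later as /var/tmp/minikube/kubeadm.yaml.new. A minimal, illustrative way to inspect it on the node (assuming this kubeadm release ships the "config validate" subcommand; not part of the test run) would be:
	
	sudo cat /var/tmp/minikube/kubeadm.yaml.new                    # the file scp'd below
	sudo /var/lib/minikube/binaries/v1.34.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new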
	
	I0917 00:13:28.748456  838391 kube-vip.go:115] generating kube-vip config ...
	I0917 00:13:28.748504  838391 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0917 00:13:28.761835  838391 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:13:28.761950  838391 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
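	Note: because the ip_vs modules were not found above, kube-vip skips IPVS load-balancing; the manifest advertises the VIP 192.168.49.254 over ARP on eth0 (vip_arp/vip_interface) and elects a leader through the plndr-cp-lock lease. An illustrative check of the VIP from a control-plane node (not part of this run):
	
	ip addr show dev eth0 | grep 192.168.49.254    # on the current leader the VIP shows up as an extra address
	curl -k https://192.168.49.254:8443/version    # a TLS response through the VIP (possibly 401/403 without credentials) means it is being served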
	I0917 00:13:28.762005  838391 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 00:13:28.771377  838391 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 00:13:28.771466  838391 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0917 00:13:28.780815  838391 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0917 00:13:28.799673  838391 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 00:13:28.818695  838391 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I0917 00:13:28.837443  838391 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0917 00:13:28.856629  838391 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0917 00:13:28.860342  838391 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
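	Note: both /etc/hosts rewrites above use the same idiom: drop any existing line for the name with grep -v, append the new mapping, and copy the temp file back over /etc/hosts with sudo. The end state for this run can be confirmed with:
	
	grep -E 'host.minikube.internal|control-plane.minikube.internal' /etc/hosts
	# expected:
	# 192.168.49.1	host.minikube.internal
	# 192.168.49.254	control-plane.minikube.internal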
	I0917 00:13:28.871978  838391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:13:28.937920  838391 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:13:28.965162  838391 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903 for IP: 192.168.49.2
	I0917 00:13:28.965183  838391 certs.go:194] generating shared ca certs ...
	I0917 00:13:28.965200  838391 certs.go:226] acquiring lock for ca certs: {Name:mk87d179b4a631193bd9c86db8034ccf19400cde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:13:28.965352  838391 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key
	I0917 00:13:28.965429  838391 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key
	I0917 00:13:28.965446  838391 certs.go:256] generating profile certs ...
	I0917 00:13:28.965567  838391 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key
	I0917 00:13:28.965609  838391 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.8acf531c
	I0917 00:13:28.965631  838391 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.8acf531c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I0917 00:13:28.981661  838391 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.8acf531c ...
	I0917 00:13:28.981698  838391 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.8acf531c: {Name:mkdef0e1cbf73e7227a698510b51d68a698391c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:13:28.981868  838391 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.8acf531c ...
	I0917 00:13:28.981880  838391 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.8acf531c: {Name:mk80b61f5fe8d635199050a211c5a719c4b8f9fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:13:28.981959  838391 certs.go:381] copying /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.8acf531c -> /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt
	I0917 00:13:28.982123  838391 certs.go:385] copying /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.8acf531c -> /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key
	I0917 00:13:28.982267  838391 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key
	I0917 00:13:28.982283  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 00:13:28.982296  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 00:13:28.982309  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 00:13:28.982327  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 00:13:28.982340  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 00:13:28.982352  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 00:13:28.982367  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 00:13:28.982379  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 00:13:28.982446  838391 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem (1338 bytes)
	W0917 00:13:28.982481  838391 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707_empty.pem, impossibly tiny 0 bytes
	I0917 00:13:28.982491  838391 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 00:13:28.982517  838391 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem (1078 bytes)
	I0917 00:13:28.982539  838391 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem (1123 bytes)
	I0917 00:13:28.982559  838391 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem (1675 bytes)
	I0917 00:13:28.982598  838391 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem (1708 bytes)
	I0917 00:13:28.982624  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem -> /usr/share/ca-certificates/752707.pem
	I0917 00:13:28.982638  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> /usr/share/ca-certificates/7527072.pem
	I0917 00:13:28.982650  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:13:28.983259  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 00:13:29.011855  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 00:13:29.044116  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 00:13:29.076632  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0917 00:13:29.102081  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0917 00:13:29.127618  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 00:13:29.154054  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 00:13:29.181152  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 00:13:29.207152  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem --> /usr/share/ca-certificates/752707.pem (1338 bytes)
	I0917 00:13:29.234803  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem --> /usr/share/ca-certificates/7527072.pem (1708 bytes)
	I0917 00:13:29.261065  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 00:13:29.285817  838391 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 00:13:29.304802  838391 ssh_runner.go:195] Run: openssl version
	I0917 00:13:29.310548  838391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7527072.pem && ln -fs /usr/share/ca-certificates/7527072.pem /etc/ssl/certs/7527072.pem"
	I0917 00:13:29.321280  838391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7527072.pem
	I0917 00:13:29.325168  838391 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 23:54 /usr/share/ca-certificates/7527072.pem
	I0917 00:13:29.325220  838391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7527072.pem
	I0917 00:13:29.332550  838391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7527072.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 00:13:29.342450  838391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 00:13:29.352677  838391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:13:29.356484  838391 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:13:29.356557  838391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:13:29.363671  838391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 00:13:29.373502  838391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/752707.pem && ln -fs /usr/share/ca-certificates/752707.pem /etc/ssl/certs/752707.pem"
	I0917 00:13:29.383350  838391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/752707.pem
	I0917 00:13:29.386969  838391 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 23:54 /usr/share/ca-certificates/752707.pem
	I0917 00:13:29.387020  838391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/752707.pem
	I0917 00:13:29.393845  838391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/752707.pem /etc/ssl/certs/51391683.0"
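	Note: the symlink names above follow OpenSSL's subject-hash convention: the link under /etc/ssl/certs is the certificate's subject hash (as printed by the preceding openssl x509 -hash -noout calls) plus ".0". With the values from this run:
	
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	ls -l /etc/ssl/certs/b5213941.0                                           # -> /etc/ssl/certs/minikubeCA.pem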
	I0917 00:13:29.402996  838391 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 00:13:29.406679  838391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 00:13:29.413276  838391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 00:13:29.420039  838391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 00:13:29.426813  838391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 00:13:29.433710  838391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 00:13:29.440812  838391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
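	Note: -checkend 86400 asks OpenSSL whether the certificate will still be valid 86400 seconds (24 hours) from now; exit status 0 means it will, 1 means it expires within that window, which is what lets the restart path reuse the existing certs. Standalone form of the same check:
	
	openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400 \
	  && echo "valid for at least 24h" || echo "expires within 24h"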
	I0917 00:13:29.447756  838391 kubeadm.go:392] StartCluster: {Name:ha-472903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:13:29.447896  838391 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0917 00:13:29.447983  838391 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 00:13:29.484343  838391 cri.go:89] found id: "5a5a17cca6c0a643b6c0881dab5508dcb7de8e6ad77d7e6ecb81d434ab2cc8a1"
	I0917 00:13:29.484364  838391 cri.go:89] found id: "8683544e2a9d579448e28b8f33653e2c8d1315b2d07bd7b4ce574428d93c6f3a"
	I0917 00:13:29.484368  838391 cri.go:89] found id: "9f103b05d2d6fd9df1ffca0135173363251e58587aa3f9093200d96a7302d315"
	I0917 00:13:29.484373  838391 cri.go:89] found id: "3b457407f10e357ce33da7fa3fb4333f8312f0d3e3570cf8528cdcac8f5a1d0f"
	I0917 00:13:29.484376  838391 cri.go:89] found id: "cc69d2451cb65860b5bc78e027be2fc1cb0f9fa6542b4abe3bc1ff1c90a8fe60"
	I0917 00:13:29.484379  838391 cri.go:89] found id: "92dd4d116eb0387dded82fb32d35690ec2d00e3f5e7ac81bf7aea0c6814edd5e"
	I0917 00:13:29.484382  838391 cri.go:89] found id: "bba28cace6502de93aa43db4fb51671581c5074990dea721d98d36d839734a67"
	I0917 00:13:29.484384  838391 cri.go:89] found id: "087290a41f59caa4f9bc89759bcec6cf90f47c8a2ab83b7c671a8fff35641df9"
	I0917 00:13:29.484387  838391 cri.go:89] found id: "0aba62132d764965d8e1a80a4a6345bb7e34892b23143da4a7af3450cd465d6c"
	I0917 00:13:29.484395  838391 cri.go:89] found id: "23c0af0bdbe9526d53769461ed9f80d8c743b02e625b65cce39c888f5e7d4b4e"
	I0917 00:13:29.484398  838391 cri.go:89] found id: ""
	I0917 00:13:29.484470  838391 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W0917 00:13:29.498073  838391 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T00:13:29Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I0917 00:13:29.498177  838391 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 00:13:29.508791  838391 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0917 00:13:29.508813  838391 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0917 00:13:29.508861  838391 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0917 00:13:29.519962  838391 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:13:29.520528  838391 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-472903" does not appear in /home/jenkins/minikube-integration/21550-749120/kubeconfig
	I0917 00:13:29.520700  838391 kubeconfig.go:62] /home/jenkins/minikube-integration/21550-749120/kubeconfig needs updating (will repair): [kubeconfig missing "ha-472903" cluster setting kubeconfig missing "ha-472903" context setting]
	I0917 00:13:29.521229  838391 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/kubeconfig: {Name:mk937123a8fee18625833b0bd778c4556f6787be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:13:29.521963  838391 kapi.go:59] client config for ha-472903: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key", CAFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0917 00:13:29.522552  838391 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0917 00:13:29.522579  838391 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0917 00:13:29.522586  838391 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0917 00:13:29.522592  838391 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0917 00:13:29.522598  838391 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0917 00:13:29.522631  838391 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0917 00:13:29.523130  838391 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0917 00:13:29.536212  838391 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.49.2
	I0917 00:13:29.536248  838391 kubeadm.go:593] duration metric: took 27.419363ms to restartPrimaryControlPlane
	I0917 00:13:29.536260  838391 kubeadm.go:394] duration metric: took 88.513961ms to StartCluster
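	Note: the decision that the running cluster "does not require reconfiguration" comes from the diff -u a few lines earlier: the freshly rendered kubeadm.yaml.new matches the kubeadm.yaml already on the node. Reproduced in isolation:
	
	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new && echo "no reconfiguration needed"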
	I0917 00:13:29.536281  838391 settings.go:142] acquiring lock: {Name:mk6c1a5bee23e141aad5180323c16c47ed580ab8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:13:29.536352  838391 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21550-749120/kubeconfig
	I0917 00:13:29.537180  838391 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/kubeconfig: {Name:mk937123a8fee18625833b0bd778c4556f6787be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:13:29.537465  838391 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0917 00:13:29.537498  838391 start.go:241] waiting for startup goroutines ...
	I0917 00:13:29.537509  838391 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 00:13:29.537779  838391 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:13:29.539896  838391 out.go:179] * Enabled addons: 
	I0917 00:13:29.541345  838391 addons.go:514] duration metric: took 3.828487ms for enable addons: enabled=[]
	I0917 00:13:29.541404  838391 start.go:246] waiting for cluster config update ...
	I0917 00:13:29.541459  838391 start.go:255] writing updated cluster config ...
	I0917 00:13:29.543184  838391 out.go:203] 
	I0917 00:13:29.548360  838391 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:13:29.548520  838391 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0917 00:13:29.550284  838391 out.go:179] * Starting "ha-472903-m02" control-plane node in "ha-472903" cluster
	I0917 00:13:29.551514  838391 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0917 00:13:29.552445  838391 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:13:29.554184  838391 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0917 00:13:29.554221  838391 cache.go:58] Caching tarball of preloaded images
	I0917 00:13:29.554326  838391 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:13:29.554361  838391 preload.go:172] Found /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 00:13:29.554376  838391 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0917 00:13:29.554541  838391 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0917 00:13:29.581238  838391 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:13:29.581265  838391 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:13:29.581286  838391 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:13:29.581322  838391 start.go:360] acquireMachinesLock for ha-472903-m02: {Name:mk81d8c73856cf84ceff1767a1681f3f3cdab773 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:13:29.581402  838391 start.go:364] duration metric: took 53.081µs to acquireMachinesLock for "ha-472903-m02"
	I0917 00:13:29.581447  838391 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:13:29.581461  838391 fix.go:54] fixHost starting: m02
	I0917 00:13:29.581795  838391 cli_runner.go:164] Run: docker container inspect ha-472903-m02 --format={{.State.Status}}
	I0917 00:13:29.604878  838391 fix.go:112] recreateIfNeeded on ha-472903-m02: state=Stopped err=<nil>
	W0917 00:13:29.604915  838391 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:13:29.607517  838391 out.go:252] * Restarting existing docker container for "ha-472903-m02" ...
	I0917 00:13:29.607600  838391 cli_runner.go:164] Run: docker start ha-472903-m02
	I0917 00:13:29.911119  838391 cli_runner.go:164] Run: docker container inspect ha-472903-m02 --format={{.State.Status}}
	I0917 00:13:29.930731  838391 kic.go:430] container "ha-472903-m02" state is running.
	I0917 00:13:29.931116  838391 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m02
	I0917 00:13:29.951026  838391 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0917 00:13:29.951305  838391 machine.go:93] provisionDockerMachine start ...
	I0917 00:13:29.951370  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0917 00:13:29.974010  838391 main.go:141] libmachine: Using SSH client type: native
	I0917 00:13:29.974330  838391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33579 <nil> <nil>}
	I0917 00:13:29.974348  838391 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:13:29.975092  838391 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37012->127.0.0.1:33579: read: connection reset by peer
	I0917 00:13:33.111351  838391 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-472903-m02
	
	I0917 00:13:33.111379  838391 ubuntu.go:182] provisioning hostname "ha-472903-m02"
	I0917 00:13:33.111466  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0917 00:13:33.129914  838391 main.go:141] libmachine: Using SSH client type: native
	I0917 00:13:33.130125  838391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33579 <nil> <nil>}
	I0917 00:13:33.130138  838391 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-472903-m02 && echo "ha-472903-m02" | sudo tee /etc/hostname
	I0917 00:13:33.276390  838391 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-472903-m02
	
	I0917 00:13:33.276473  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0917 00:13:33.295322  838391 main.go:141] libmachine: Using SSH client type: native
	I0917 00:13:33.295578  838391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33579 <nil> <nil>}
	I0917 00:13:33.295626  838391 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-472903-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-472903-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-472903-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 00:13:33.430221  838391 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:13:33.430255  838391 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-749120/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-749120/.minikube}
	I0917 00:13:33.430276  838391 ubuntu.go:190] setting up certificates
	I0917 00:13:33.430293  838391 provision.go:84] configureAuth start
	I0917 00:13:33.430347  838391 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m02
	I0917 00:13:33.447859  838391 provision.go:143] copyHostCerts
	I0917 00:13:33.447896  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem
	I0917 00:13:33.447924  838391 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem, removing ...
	I0917 00:13:33.447931  838391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem
	I0917 00:13:33.447997  838391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem (1078 bytes)
	I0917 00:13:33.448082  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem
	I0917 00:13:33.448101  838391 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem, removing ...
	I0917 00:13:33.448105  838391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem
	I0917 00:13:33.448129  838391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem (1123 bytes)
	I0917 00:13:33.448171  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem
	I0917 00:13:33.448188  838391 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem, removing ...
	I0917 00:13:33.448194  838391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem
	I0917 00:13:33.448221  838391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem (1675 bytes)
	I0917 00:13:33.448284  838391 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem org=jenkins.ha-472903-m02 san=[127.0.0.1 192.168.49.3 ha-472903-m02 localhost minikube]
	I0917 00:13:33.772202  838391 provision.go:177] copyRemoteCerts
	I0917 00:13:33.772271  838391 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:13:33.772308  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0917 00:13:33.790580  838391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33579 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa Username:docker}
	I0917 00:13:33.888743  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 00:13:33.888811  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 00:13:33.915641  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 00:13:33.915714  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 00:13:33.947505  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 00:13:33.947576  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 00:13:33.982626  838391 provision.go:87] duration metric: took 552.315533ms to configureAuth
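	Note: configureAuth regenerated the Docker machine server certificate for m02 with the SANs listed above ([127.0.0.1 192.168.49.3 ha-472903-m02 localhost minikube]) and copied it to /etc/docker on the node. An illustrative way to confirm the SANs on the copied cert:
	
	openssl x509 -noout -text -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'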
	I0917 00:13:33.982666  838391 ubuntu.go:206] setting minikube options for container-runtime
	I0917 00:13:33.983009  838391 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:13:33.983035  838391 machine.go:96] duration metric: took 4.031716501s to provisionDockerMachine
	I0917 00:13:33.983048  838391 start.go:293] postStartSetup for "ha-472903-m02" (driver="docker")
	I0917 00:13:33.983079  838391 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 00:13:33.983149  838391 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 00:13:33.983189  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0917 00:13:34.006390  838391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33579 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa Username:docker}
	I0917 00:13:34.114836  838391 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 00:13:34.122569  838391 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 00:13:34.122609  838391 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 00:13:34.122622  838391 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 00:13:34.122631  838391 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 00:13:34.122648  838391 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-749120/.minikube/addons for local assets ...
	I0917 00:13:34.122715  838391 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-749120/.minikube/files for local assets ...
	I0917 00:13:34.122819  838391 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> 7527072.pem in /etc/ssl/certs
	I0917 00:13:34.122842  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> /etc/ssl/certs/7527072.pem
	I0917 00:13:34.122963  838391 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 00:13:34.133119  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem --> /etc/ssl/certs/7527072.pem (1708 bytes)
	I0917 00:13:34.163792  838391 start.go:296] duration metric: took 180.726136ms for postStartSetup
	I0917 00:13:34.163881  838391 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:13:34.163931  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0917 00:13:34.187017  838391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33579 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa Username:docker}
	I0917 00:13:34.289000  838391 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:13:34.295122  838391 fix.go:56] duration metric: took 4.713651457s for fixHost
	I0917 00:13:34.295149  838391 start.go:83] releasing machines lock for "ha-472903-m02", held for 4.713713361s
	I0917 00:13:34.295238  838391 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m02
	I0917 00:13:34.323055  838391 out.go:179] * Found network options:
	I0917 00:13:34.324886  838391 out.go:179]   - NO_PROXY=192.168.49.2
	W0917 00:13:34.326740  838391 proxy.go:120] fail to check proxy env: Error ip not in block
	W0917 00:13:34.326797  838391 proxy.go:120] fail to check proxy env: Error ip not in block
	I0917 00:13:34.326881  838391 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 00:13:34.326949  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0917 00:13:34.327068  838391 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 00:13:34.327142  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0917 00:13:34.349495  838391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33579 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa Username:docker}
	I0917 00:13:34.351023  838391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33579 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa Username:docker}
	I0917 00:13:34.450454  838391 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0917 00:13:34.547618  838391 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0917 00:13:34.547706  838391 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:13:34.558822  838391 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
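	Note: the two find/sed passes above patch the loopback CNI config in place (adding a "name": "loopback" field and pinning cniVersion to 1.0.0) and would rename any bridge/podman configs to *.mk_disabled; in this run none were present. The result can be inspected with, for example:
	
	cat /etc/cni/net.d/*loopback.conf*            # now contains "name": "loopback" and "cniVersion": "1.0.0"
	ls /etc/cni/net.d/*.mk_disabled 2>/dev/null || echo "nothing was disabled"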
	I0917 00:13:34.558854  838391 start.go:495] detecting cgroup driver to use...
	I0917 00:13:34.558889  838391 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:13:34.558939  838391 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 00:13:34.584135  838391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 00:13:34.599048  838391 docker.go:218] disabling cri-docker service (if available) ...
	I0917 00:13:34.599118  838391 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 00:13:34.615043  838391 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 00:13:34.627813  838391 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 00:13:34.751575  838391 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 00:13:34.913336  838391 docker.go:234] disabling docker service ...
	I0917 00:13:34.913429  838391 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 00:13:34.943843  838391 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 00:13:34.964995  838391 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 00:13:35.154858  838391 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 00:13:35.276803  838391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 00:13:35.292337  838391 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:13:35.312501  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0917 00:13:35.325061  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 00:13:35.337094  838391 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0917 00:13:35.337162  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0917 00:13:35.349635  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:13:35.361644  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 00:13:35.373144  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:13:35.385968  838391 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 00:13:35.397684  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 00:13:35.409662  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 00:13:35.422089  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 00:13:35.433950  838391 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 00:13:35.445355  838391 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 00:13:35.456096  838391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:13:35.554404  838391 ssh_runner.go:195] Run: sudo systemctl restart containerd
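	Note: the sed series above rewrites /etc/containerd/config.toml for this run: SystemdCgroup = true (matching the detected systemd cgroup driver), registry.k8s.io/pause:3.10.1 as the sandbox image, the runc v2 runtime, /etc/cni/net.d as the CNI conf dir, and unprivileged ports enabled, followed by a daemon-reload and containerd restart. A quick, illustrative verification on the node (assuming crictl info echoes the runc options):
	
	grep -nE 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml
	sudo crictl info | grep -i SystemdCgroup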
	I0917 00:13:35.775103  838391 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0917 00:13:35.775175  838391 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0917 00:13:35.780034  838391 start.go:563] Will wait 60s for crictl version
	I0917 00:13:35.780106  838391 ssh_runner.go:195] Run: which crictl
	I0917 00:13:35.784109  838391 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:13:35.826151  838391 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0917 00:13:35.826224  838391 ssh_runner.go:195] Run: containerd --version
	I0917 00:13:35.852960  838391 ssh_runner.go:195] Run: containerd --version
	I0917 00:13:35.877876  838391 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0917 00:13:35.879103  838391 out.go:179]   - env NO_PROXY=192.168.49.2
	I0917 00:13:35.880100  838391 cli_runner.go:164] Run: docker network inspect ha-472903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:13:35.897195  838391 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0917 00:13:35.901082  838391 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:13:35.912748  838391 mustload.go:65] Loading cluster: ha-472903
	I0917 00:13:35.912967  838391 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:13:35.913168  838391 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0917 00:13:35.931969  838391 host.go:66] Checking if "ha-472903" exists ...
	I0917 00:13:35.932217  838391 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903 for IP: 192.168.49.3
	I0917 00:13:35.932230  838391 certs.go:194] generating shared ca certs ...
	I0917 00:13:35.932244  838391 certs.go:226] acquiring lock for ca certs: {Name:mk87d179b4a631193bd9c86db8034ccf19400cde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:13:35.932358  838391 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key
	I0917 00:13:35.932394  838391 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key
	I0917 00:13:35.932404  838391 certs.go:256] generating profile certs ...
	I0917 00:13:35.932495  838391 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key
	I0917 00:13:35.932546  838391 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.b92722b6
	I0917 00:13:35.932585  838391 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key
	I0917 00:13:35.932596  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 00:13:35.932607  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 00:13:35.932619  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 00:13:35.932630  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 00:13:35.932643  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 00:13:35.932656  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 00:13:35.932668  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 00:13:35.932681  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 00:13:35.932726  838391 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem (1338 bytes)
	W0917 00:13:35.932752  838391 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707_empty.pem, impossibly tiny 0 bytes
	I0917 00:13:35.932761  838391 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 00:13:35.932781  838391 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem (1078 bytes)
	I0917 00:13:35.932801  838391 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem (1123 bytes)
	I0917 00:13:35.932822  838391 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem (1675 bytes)
	I0917 00:13:35.932861  838391 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem (1708 bytes)
	I0917 00:13:35.932888  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem -> /usr/share/ca-certificates/752707.pem
	I0917 00:13:35.932902  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> /usr/share/ca-certificates/7527072.pem
	I0917 00:13:35.932914  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:13:35.932957  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0917 00:13:35.950361  838391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33574 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0917 00:13:36.038689  838391 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0917 00:13:36.046320  838391 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0917 00:13:36.065517  838391 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0917 00:13:36.070746  838391 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0917 00:13:36.088267  838391 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0917 00:13:36.093060  838391 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0917 00:13:36.109798  838391 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0917 00:13:36.114630  838391 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0917 00:13:36.132250  838391 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0917 00:13:36.137979  838391 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0917 00:13:36.158118  838391 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0917 00:13:36.163359  838391 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0917 00:13:36.183892  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 00:13:36.221052  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 00:13:36.260302  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 00:13:36.294497  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0917 00:13:36.328388  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0917 00:13:36.364809  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 00:13:36.406406  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 00:13:36.458823  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 00:13:36.524795  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem --> /usr/share/ca-certificates/752707.pem (1338 bytes)
	I0917 00:13:36.572655  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem --> /usr/share/ca-certificates/7527072.pem (1708 bytes)
	I0917 00:13:36.619864  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 00:13:36.672387  838391 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0917 00:13:36.709674  838391 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0917 00:13:36.746751  838391 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0917 00:13:36.783161  838391 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0917 00:13:36.813099  838391 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0917 00:13:36.837070  838391 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0917 00:13:36.858764  838391 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0917 00:13:36.877818  838391 ssh_runner.go:195] Run: openssl version
	I0917 00:13:36.883443  838391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7527072.pem && ln -fs /usr/share/ca-certificates/7527072.pem /etc/ssl/certs/7527072.pem"
	I0917 00:13:36.894826  838391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7527072.pem
	I0917 00:13:36.899068  838391 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 23:54 /usr/share/ca-certificates/7527072.pem
	I0917 00:13:36.899146  838391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7527072.pem
	I0917 00:13:36.907246  838391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7527072.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 00:13:36.916910  838391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 00:13:36.927032  838391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:13:36.930914  838391 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:13:36.930968  838391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:13:36.940300  838391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 00:13:36.953573  838391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/752707.pem && ln -fs /usr/share/ca-certificates/752707.pem /etc/ssl/certs/752707.pem"
	I0917 00:13:36.967306  838391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/752707.pem
	I0917 00:13:36.971796  838391 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 23:54 /usr/share/ca-certificates/752707.pem
	I0917 00:13:36.971852  838391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/752707.pem
	I0917 00:13:36.981091  838391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/752707.pem /etc/ssl/certs/51391683.0"
	I0917 00:13:36.991490  838391 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 00:13:36.995167  838391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 00:13:37.003067  838391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 00:13:37.009863  838391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 00:13:37.016575  838391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 00:13:37.023485  838391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 00:13:37.032694  838391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0917 00:13:37.042763  838391 kubeadm.go:926] updating node {m02 192.168.49.3 8443 v1.34.0 containerd true true} ...
	I0917 00:13:37.042877  838391 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-472903-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 00:13:37.042911  838391 kube-vip.go:115] generating kube-vip config ...
	I0917 00:13:37.042948  838391 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0917 00:13:37.060530  838391 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:13:37.060601  838391 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0917 00:13:37.060658  838391 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 00:13:37.072293  838391 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 00:13:37.072371  838391 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0917 00:13:37.084220  838391 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0917 00:13:37.109777  838391 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 00:13:37.137135  838391 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0917 00:13:37.165385  838391 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0917 00:13:37.170106  838391 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:13:37.186447  838391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:13:37.337215  838391 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:13:37.351480  838391 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0917 00:13:37.351795  838391 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:13:37.353499  838391 out.go:179] * Verifying Kubernetes components...
	I0917 00:13:37.354663  838391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:13:37.476140  838391 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:13:37.492755  838391 kapi.go:59] client config for ha-472903: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key", CAFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0917 00:13:37.492840  838391 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0917 00:13:37.493129  838391 node_ready.go:35] waiting up to 6m0s for node "ha-472903-m02" to be "Ready" ...
	I0917 00:13:37.501768  838391 node_ready.go:49] node "ha-472903-m02" is "Ready"
	I0917 00:13:37.501795  838391 node_ready.go:38] duration metric: took 8.646756ms for node "ha-472903-m02" to be "Ready" ...
	I0917 00:13:37.501810  838391 api_server.go:52] waiting for apiserver process to appear ...
	I0917 00:13:37.501850  838391 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:13:37.513878  838391 api_server.go:72] duration metric: took 162.352734ms to wait for apiserver process to appear ...
	I0917 00:13:37.513902  838391 api_server.go:88] waiting for apiserver healthz status ...
	I0917 00:13:37.513995  838391 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0917 00:13:37.519494  838391 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0917 00:13:37.520502  838391 api_server.go:141] control plane version: v1.34.0
	I0917 00:13:37.520525  838391 api_server.go:131] duration metric: took 6.615829ms to wait for apiserver health ...
	I0917 00:13:37.520533  838391 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 00:13:37.529003  838391 system_pods.go:59] 24 kube-system pods found
	I0917 00:13:37.529040  838391 system_pods.go:61] "coredns-66bc5c9577-c94hz" [774f1c0f-9759-44c2-957d-5a97670f951b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:13:37.529049  838391 system_pods.go:61] "coredns-66bc5c9577-qn8m7" [1c58205e-e865-42fc-8282-23e3d779ee97] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:13:37.529058  838391 system_pods.go:61] "etcd-ha-472903" [e333577b-838c-41c5-ba86-ce3d7de57077] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 00:13:37.529064  838391 system_pods.go:61] "etcd-ha-472903-m02" [8a478117-c53d-4621-aa09-be3c16d386c0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 00:13:37.529068  838391 system_pods.go:61] "etcd-ha-472903-m03" [73e10c6a-306a-4c7e-b816-1b8d6b815292] Running
	I0917 00:13:37.529072  838391 system_pods.go:61] "kindnet-lh7dv" [1da43ca7-9af7-4573-9cdc-fd21b098ca2c] Running
	I0917 00:13:37.529075  838391 system_pods.go:61] "kindnet-q7c7s" [85db5b30-8ace-4bb0-8886-32b9ca032b2b] Running
	I0917 00:13:37.529078  838391 system_pods.go:61] "kindnet-x6twd" [f2346479-1adb-4bc7-af07-971525be2b05] Running
	I0917 00:13:37.529083  838391 system_pods.go:61] "kube-apiserver-ha-472903" [e2844751-3962-4753-8b63-79c124dd5fd7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:13:37.529092  838391 system_pods.go:61] "kube-apiserver-ha-472903-m02" [6675419c-7693-4970-b73c-8415bcda1684] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:13:37.529096  838391 system_pods.go:61] "kube-apiserver-ha-472903-m03" [8c79e747-e193-4471-a4be-ab4d604998ad] Running
	I0917 00:13:37.529102  838391 system_pods.go:61] "kube-controller-manager-ha-472903" [be5cfd0b-a3b9-44cf-8cde-74e9eb89c738] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 00:13:37.529110  838391 system_pods.go:61] "kube-controller-manager-ha-472903-m02" [54f6e7e0-0a78-4651-b24f-f902c6bf7efb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 00:13:37.529113  838391 system_pods.go:61] "kube-controller-manager-ha-472903-m03" [d48b1c84-6653-43e8-9322-ae2c64471dde] Running
	I0917 00:13:37.529118  838391 system_pods.go:61] "kube-proxy-58lkb" [32fed88c-ce9e-4536-8e96-04ab5b4f5d42] Running
	I0917 00:13:37.529121  838391 system_pods.go:61] "kube-proxy-d4m8f" [d4a70eec-48a7-4ea6-871a-1b5ed2beca9a] Running
	I0917 00:13:37.529125  838391 system_pods.go:61] "kube-proxy-kn6nb" [53644856-9fda-4556-bbb5-12254c4b00a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 00:13:37.529131  838391 system_pods.go:61] "kube-scheduler-ha-472903" [e949de65-b218-45cb-abe7-79b704aae473] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 00:13:37.529136  838391 system_pods.go:61] "kube-scheduler-ha-472903-m02" [08b5a4f0-3aa6-4a82-b171-afc1eafcd4c2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 00:13:37.529144  838391 system_pods.go:61] "kube-scheduler-ha-472903-m03" [7b954c46-3b8c-47e7-b10c-a20dd936d45c] Running
	I0917 00:13:37.529147  838391 system_pods.go:61] "kube-vip-ha-472903" [d3849ebb-d365-491f-955c-2a7ca580290b] Running
	I0917 00:13:37.529150  838391 system_pods.go:61] "kube-vip-ha-472903-m02" [748f096f-bec6-4de8-92f0-128db827bdd6] Running
	I0917 00:13:37.529153  838391 system_pods.go:61] "kube-vip-ha-472903-m03" [62b1f237-95c3-4a40-b3b7-a519f7c80ad4] Running
	I0917 00:13:37.529156  838391 system_pods.go:61] "storage-provisioner" [ac7f283e-4d28-46cf-a519-bd227237d5e7] Running
	I0917 00:13:37.529161  838391 system_pods.go:74] duration metric: took 8.623694ms to wait for pod list to return data ...
	I0917 00:13:37.529167  838391 default_sa.go:34] waiting for default service account to be created ...
	I0917 00:13:37.531877  838391 default_sa.go:45] found service account: "default"
	I0917 00:13:37.531901  838391 default_sa.go:55] duration metric: took 2.728819ms for default service account to be created ...
	I0917 00:13:37.531910  838391 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 00:13:37.538254  838391 system_pods.go:86] 24 kube-system pods found
	I0917 00:13:37.538287  838391 system_pods.go:89] "coredns-66bc5c9577-c94hz" [774f1c0f-9759-44c2-957d-5a97670f951b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:13:37.538298  838391 system_pods.go:89] "coredns-66bc5c9577-qn8m7" [1c58205e-e865-42fc-8282-23e3d779ee97] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:13:37.538308  838391 system_pods.go:89] "etcd-ha-472903" [e333577b-838c-41c5-ba86-ce3d7de57077] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 00:13:37.538315  838391 system_pods.go:89] "etcd-ha-472903-m02" [8a478117-c53d-4621-aa09-be3c16d386c0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 00:13:37.538321  838391 system_pods.go:89] "etcd-ha-472903-m03" [73e10c6a-306a-4c7e-b816-1b8d6b815292] Running
	I0917 00:13:37.538327  838391 system_pods.go:89] "kindnet-lh7dv" [1da43ca7-9af7-4573-9cdc-fd21b098ca2c] Running
	I0917 00:13:37.538333  838391 system_pods.go:89] "kindnet-q7c7s" [85db5b30-8ace-4bb0-8886-32b9ca032b2b] Running
	I0917 00:13:37.538340  838391 system_pods.go:89] "kindnet-x6twd" [f2346479-1adb-4bc7-af07-971525be2b05] Running
	I0917 00:13:37.538353  838391 system_pods.go:89] "kube-apiserver-ha-472903" [e2844751-3962-4753-8b63-79c124dd5fd7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:13:37.538366  838391 system_pods.go:89] "kube-apiserver-ha-472903-m02" [6675419c-7693-4970-b73c-8415bcda1684] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:13:37.538373  838391 system_pods.go:89] "kube-apiserver-ha-472903-m03" [8c79e747-e193-4471-a4be-ab4d604998ad] Running
	I0917 00:13:37.538383  838391 system_pods.go:89] "kube-controller-manager-ha-472903" [be5cfd0b-a3b9-44cf-8cde-74e9eb89c738] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 00:13:37.538396  838391 system_pods.go:89] "kube-controller-manager-ha-472903-m02" [54f6e7e0-0a78-4651-b24f-f902c6bf7efb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 00:13:37.538406  838391 system_pods.go:89] "kube-controller-manager-ha-472903-m03" [d48b1c84-6653-43e8-9322-ae2c64471dde] Running
	I0917 00:13:37.538447  838391 system_pods.go:89] "kube-proxy-58lkb" [32fed88c-ce9e-4536-8e96-04ab5b4f5d42] Running
	I0917 00:13:37.538457  838391 system_pods.go:89] "kube-proxy-d4m8f" [d4a70eec-48a7-4ea6-871a-1b5ed2beca9a] Running
	I0917 00:13:37.538465  838391 system_pods.go:89] "kube-proxy-kn6nb" [53644856-9fda-4556-bbb5-12254c4b00a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 00:13:37.538479  838391 system_pods.go:89] "kube-scheduler-ha-472903" [e949de65-b218-45cb-abe7-79b704aae473] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 00:13:37.538492  838391 system_pods.go:89] "kube-scheduler-ha-472903-m02" [08b5a4f0-3aa6-4a82-b171-afc1eafcd4c2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 00:13:37.538504  838391 system_pods.go:89] "kube-scheduler-ha-472903-m03" [7b954c46-3b8c-47e7-b10c-a20dd936d45c] Running
	I0917 00:13:37.538511  838391 system_pods.go:89] "kube-vip-ha-472903" [d3849ebb-d365-491f-955c-2a7ca580290b] Running
	I0917 00:13:37.538517  838391 system_pods.go:89] "kube-vip-ha-472903-m02" [748f096f-bec6-4de8-92f0-128db827bdd6] Running
	I0917 00:13:37.538523  838391 system_pods.go:89] "kube-vip-ha-472903-m03" [62b1f237-95c3-4a40-b3b7-a519f7c80ad4] Running
	I0917 00:13:37.538528  838391 system_pods.go:89] "storage-provisioner" [ac7f283e-4d28-46cf-a519-bd227237d5e7] Running
	I0917 00:13:37.538538  838391 system_pods.go:126] duration metric: took 6.620318ms to wait for k8s-apps to be running ...
	I0917 00:13:37.538550  838391 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 00:13:37.538595  838391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:13:37.551380  838391 system_svc.go:56] duration metric: took 12.817524ms WaitForService to wait for kubelet
	I0917 00:13:37.551421  838391 kubeadm.go:578] duration metric: took 199.889741ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:13:37.551446  838391 node_conditions.go:102] verifying NodePressure condition ...
	I0917 00:13:37.554601  838391 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:13:37.554630  838391 node_conditions.go:123] node cpu capacity is 8
	I0917 00:13:37.554646  838391 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:13:37.554651  838391 node_conditions.go:123] node cpu capacity is 8
	I0917 00:13:37.554657  838391 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:13:37.554661  838391 node_conditions.go:123] node cpu capacity is 8
	I0917 00:13:37.554667  838391 node_conditions.go:105] duration metric: took 3.21568ms to run NodePressure ...
	I0917 00:13:37.554682  838391 start.go:241] waiting for startup goroutines ...
	I0917 00:13:37.554713  838391 start.go:255] writing updated cluster config ...
	I0917 00:13:37.556785  838391 out.go:203] 
	I0917 00:13:37.558118  838391 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:13:37.558205  838391 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0917 00:13:37.560287  838391 out.go:179] * Starting "ha-472903-m03" control-plane node in "ha-472903" cluster
	I0917 00:13:37.561674  838391 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0917 00:13:37.562756  838391 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:13:37.563720  838391 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0917 00:13:37.563746  838391 cache.go:58] Caching tarball of preloaded images
	I0917 00:13:37.563814  838391 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:13:37.563852  838391 preload.go:172] Found /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 00:13:37.563866  838391 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0917 00:13:37.563958  838391 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0917 00:13:37.584605  838391 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:13:37.584624  838391 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:13:37.584638  838391 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:13:37.584670  838391 start.go:360] acquireMachinesLock for ha-472903-m03: {Name:mk61000bb8e4699ca3310a7fc257e30a156b69de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:13:37.584735  838391 start.go:364] duration metric: took 44.453µs to acquireMachinesLock for "ha-472903-m03"
	I0917 00:13:37.584761  838391 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:13:37.584768  838391 fix.go:54] fixHost starting: m03
	I0917 00:13:37.585018  838391 cli_runner.go:164] Run: docker container inspect ha-472903-m03 --format={{.State.Status}}
	I0917 00:13:37.604118  838391 fix.go:112] recreateIfNeeded on ha-472903-m03: state=Stopped err=<nil>
	W0917 00:13:37.604141  838391 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:13:37.606555  838391 out.go:252] * Restarting existing docker container for "ha-472903-m03" ...
	I0917 00:13:37.606618  838391 cli_runner.go:164] Run: docker start ha-472903-m03
	I0917 00:13:37.854742  838391 cli_runner.go:164] Run: docker container inspect ha-472903-m03 --format={{.State.Status}}
	I0917 00:13:37.873167  838391 kic.go:430] container "ha-472903-m03" state is running.
	I0917 00:13:37.873554  838391 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m03
	I0917 00:13:37.894030  838391 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0917 00:13:37.894294  838391 machine.go:93] provisionDockerMachine start ...
	I0917 00:13:37.894371  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0917 00:13:37.912571  838391 main.go:141] libmachine: Using SSH client type: native
	I0917 00:13:37.912785  838391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33584 <nil> <nil>}
	I0917 00:13:37.912796  838391 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:13:37.913480  838391 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50250->127.0.0.1:33584: read: connection reset by peer
	I0917 00:13:41.078339  838391 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-472903-m03
	
	I0917 00:13:41.078371  838391 ubuntu.go:182] provisioning hostname "ha-472903-m03"
	I0917 00:13:41.078468  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0917 00:13:41.099623  838391 main.go:141] libmachine: Using SSH client type: native
	I0917 00:13:41.099906  838391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33584 <nil> <nil>}
	I0917 00:13:41.099929  838391 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-472903-m03 && echo "ha-472903-m03" | sudo tee /etc/hostname
	I0917 00:13:41.256611  838391 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-472903-m03
	
	I0917 00:13:41.256681  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0917 00:13:41.275951  838391 main.go:141] libmachine: Using SSH client type: native
	I0917 00:13:41.276266  838391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33584 <nil> <nil>}
	I0917 00:13:41.276291  838391 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-472903-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-472903-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-472903-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 00:13:41.413177  838391 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:13:41.413213  838391 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-749120/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-749120/.minikube}
	I0917 00:13:41.413235  838391 ubuntu.go:190] setting up certificates
	I0917 00:13:41.413252  838391 provision.go:84] configureAuth start
	I0917 00:13:41.413326  838391 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m03
	I0917 00:13:41.432242  838391 provision.go:143] copyHostCerts
	I0917 00:13:41.432284  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem
	I0917 00:13:41.432323  838391 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem, removing ...
	I0917 00:13:41.432334  838391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem
	I0917 00:13:41.432427  838391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem (1078 bytes)
	I0917 00:13:41.432522  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem
	I0917 00:13:41.432547  838391 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem, removing ...
	I0917 00:13:41.432556  838391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem
	I0917 00:13:41.432591  838391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem (1123 bytes)
	I0917 00:13:41.432652  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem
	I0917 00:13:41.432676  838391 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem, removing ...
	I0917 00:13:41.432684  838391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem
	I0917 00:13:41.432717  838391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem (1675 bytes)
	I0917 00:13:41.432785  838391 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem org=jenkins.ha-472903-m03 san=[127.0.0.1 192.168.49.4 ha-472903-m03 localhost minikube]
	I0917 00:13:41.862573  838391 provision.go:177] copyRemoteCerts
	I0917 00:13:41.862629  838391 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:13:41.862665  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0917 00:13:41.885400  838391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33584 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa Username:docker}
	I0917 00:13:41.994335  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 00:13:41.994423  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 00:13:42.028538  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 00:13:42.028607  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 00:13:42.067649  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 00:13:42.067726  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 00:13:42.099602  838391 provision.go:87] duration metric: took 686.33067ms to configureAuth
	I0917 00:13:42.099636  838391 ubuntu.go:206] setting minikube options for container-runtime
	I0917 00:13:42.099920  838391 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:13:42.099938  838391 machine.go:96] duration metric: took 4.205627363s to provisionDockerMachine
	I0917 00:13:42.099950  838391 start.go:293] postStartSetup for "ha-472903-m03" (driver="docker")
	I0917 00:13:42.099962  838391 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 00:13:42.100117  838391 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 00:13:42.100183  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0917 00:13:42.122141  838391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33584 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa Username:docker}
	I0917 00:13:42.233836  838391 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 00:13:42.238854  838391 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 00:13:42.238889  838391 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 00:13:42.238900  838391 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 00:13:42.238908  838391 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 00:13:42.238924  838391 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-749120/.minikube/addons for local assets ...
	I0917 00:13:42.238985  838391 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-749120/.minikube/files for local assets ...
	I0917 00:13:42.239080  838391 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> 7527072.pem in /etc/ssl/certs
	I0917 00:13:42.239088  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> /etc/ssl/certs/7527072.pem
	I0917 00:13:42.239207  838391 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 00:13:42.256636  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem --> /etc/ssl/certs/7527072.pem (1708 bytes)
	I0917 00:13:42.284884  838391 start.go:296] duration metric: took 184.914637ms for postStartSetup
	I0917 00:13:42.284980  838391 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:13:42.285038  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0917 00:13:42.306309  838391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33584 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa Username:docker}
	I0917 00:13:42.403953  838391 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:13:42.409407  838391 fix.go:56] duration metric: took 4.824632112s for fixHost
	I0917 00:13:42.409462  838391 start.go:83] releasing machines lock for "ha-472903-m03", held for 4.824710137s
	I0917 00:13:42.409541  838391 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m03
	I0917 00:13:42.432198  838391 out.go:179] * Found network options:
	I0917 00:13:42.433393  838391 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0917 00:13:42.434713  838391 proxy.go:120] fail to check proxy env: Error ip not in block
	W0917 00:13:42.434749  838391 proxy.go:120] fail to check proxy env: Error ip not in block
	W0917 00:13:42.434778  838391 proxy.go:120] fail to check proxy env: Error ip not in block
	W0917 00:13:42.434796  838391 proxy.go:120] fail to check proxy env: Error ip not in block
	I0917 00:13:42.434873  838391 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 00:13:42.434927  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0917 00:13:42.434964  838391 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 00:13:42.435037  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0917 00:13:42.456445  838391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33584 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa Username:docker}
	I0917 00:13:42.457637  838391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33584 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa Username:docker}
	I0917 00:13:42.649452  838391 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0917 00:13:42.669255  838391 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0917 00:13:42.669336  838391 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:13:42.678466  838391 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0917 00:13:42.678490  838391 start.go:495] detecting cgroup driver to use...
	I0917 00:13:42.678537  838391 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:13:42.678593  838391 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 00:13:42.694034  838391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 00:13:42.706095  838391 docker.go:218] disabling cri-docker service (if available) ...
	I0917 00:13:42.706148  838391 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 00:13:42.720214  838391 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 00:13:42.731568  838391 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 00:13:42.844067  838391 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 00:13:42.990517  838391 docker.go:234] disabling docker service ...
	I0917 00:13:42.990597  838391 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 00:13:43.009784  838391 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 00:13:43.025954  838391 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 00:13:43.175561  838391 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 00:13:43.288802  838391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 00:13:43.302127  838391 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:13:43.320551  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0917 00:13:43.330880  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 00:13:43.341008  838391 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0917 00:13:43.341063  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0917 00:13:43.351160  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:13:43.361609  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 00:13:43.371882  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:13:43.382351  838391 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 00:13:43.391804  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 00:13:43.401909  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 00:13:43.413802  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 00:13:43.424357  838391 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 00:13:43.433387  838391 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 00:13:43.442035  838391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:13:43.556953  838391 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0917 00:13:43.771383  838391 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0917 00:13:43.771487  838391 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0917 00:13:43.776031  838391 start.go:563] Will wait 60s for crictl version
	I0917 00:13:43.776089  838391 ssh_runner.go:195] Run: which crictl
	I0917 00:13:43.779581  838391 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:13:43.819843  838391 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0917 00:13:43.819918  838391 ssh_runner.go:195] Run: containerd --version
	I0917 00:13:43.856395  838391 ssh_runner.go:195] Run: containerd --version
	I0917 00:13:43.887208  838391 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0917 00:13:43.888621  838391 out.go:179]   - env NO_PROXY=192.168.49.2
	I0917 00:13:43.889813  838391 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0917 00:13:43.890984  838391 cli_runner.go:164] Run: docker network inspect ha-472903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:13:43.910830  838391 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0917 00:13:43.915764  838391 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:13:43.928519  838391 mustload.go:65] Loading cluster: ha-472903
	I0917 00:13:43.928713  838391 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:13:43.928903  838391 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0917 00:13:43.947488  838391 host.go:66] Checking if "ha-472903" exists ...
	I0917 00:13:43.947756  838391 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903 for IP: 192.168.49.4
	I0917 00:13:43.947768  838391 certs.go:194] generating shared ca certs ...
	I0917 00:13:43.947788  838391 certs.go:226] acquiring lock for ca certs: {Name:mk87d179b4a631193bd9c86db8034ccf19400cde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:13:43.947924  838391 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key
	I0917 00:13:43.947984  838391 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key
	I0917 00:13:43.947997  838391 certs.go:256] generating profile certs ...
	I0917 00:13:43.948089  838391 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key
	I0917 00:13:43.948160  838391 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.14b885b8
	I0917 00:13:43.948220  838391 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key
	I0917 00:13:43.948236  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 00:13:43.948257  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 00:13:43.948274  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 00:13:43.948291  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 00:13:43.948305  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 00:13:43.948322  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 00:13:43.948341  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 00:13:43.948359  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 00:13:43.948448  838391 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem (1338 bytes)
	W0917 00:13:43.948497  838391 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707_empty.pem, impossibly tiny 0 bytes
	I0917 00:13:43.948514  838391 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 00:13:43.948542  838391 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem (1078 bytes)
	I0917 00:13:43.948574  838391 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem (1123 bytes)
	I0917 00:13:43.948605  838391 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem (1675 bytes)
	I0917 00:13:43.948679  838391 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem (1708 bytes)
	I0917 00:13:43.948730  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:13:43.948750  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem -> /usr/share/ca-certificates/752707.pem
	I0917 00:13:43.948766  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> /usr/share/ca-certificates/7527072.pem
	I0917 00:13:43.948828  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0917 00:13:43.966378  838391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33574 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0917 00:13:44.054709  838391 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0917 00:13:44.058781  838391 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0917 00:13:44.071805  838391 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0917 00:13:44.075707  838391 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0917 00:13:44.088751  838391 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0917 00:13:44.092347  838391 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0917 00:13:44.104909  838391 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0917 00:13:44.108527  838391 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0917 00:13:44.121249  838391 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0917 00:13:44.124730  838391 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0917 00:13:44.137128  838391 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0917 00:13:44.140545  838391 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0917 00:13:44.153313  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 00:13:44.178995  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 00:13:44.203321  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 00:13:44.228724  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0917 00:13:44.253672  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0917 00:13:44.277964  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 00:13:44.302441  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 00:13:44.326350  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 00:13:44.351539  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 00:13:44.376666  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem --> /usr/share/ca-certificates/752707.pem (1338 bytes)
	I0917 00:13:44.404677  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem --> /usr/share/ca-certificates/7527072.pem (1708 bytes)
	I0917 00:13:44.431366  838391 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0917 00:13:44.450278  838391 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0917 00:13:44.468513  838391 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0917 00:13:44.486743  838391 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0917 00:13:44.504987  838391 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0917 00:13:44.524143  838391 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0917 00:13:44.542282  838391 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0917 00:13:44.563055  838391 ssh_runner.go:195] Run: openssl version
	I0917 00:13:44.569331  838391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 00:13:44.580250  838391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:13:44.584080  838391 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:13:44.584138  838391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:13:44.591070  838391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 00:13:44.600282  838391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/752707.pem && ln -fs /usr/share/ca-certificates/752707.pem /etc/ssl/certs/752707.pem"
	I0917 00:13:44.610104  838391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/752707.pem
	I0917 00:13:44.613726  838391 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 23:54 /usr/share/ca-certificates/752707.pem
	I0917 00:13:44.613768  838391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/752707.pem
	I0917 00:13:44.620611  838391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/752707.pem /etc/ssl/certs/51391683.0"
	I0917 00:13:44.629788  838391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7527072.pem && ln -fs /usr/share/ca-certificates/7527072.pem /etc/ssl/certs/7527072.pem"
	I0917 00:13:44.639483  838391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7527072.pem
	I0917 00:13:44.643062  838391 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 23:54 /usr/share/ca-certificates/7527072.pem
	I0917 00:13:44.643110  838391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7527072.pem
	I0917 00:13:44.650489  838391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7527072.pem /etc/ssl/certs/3ec20f2e.0"
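The three symlink commands above follow OpenSSL's trust-store convention: each CA PEM is first copied under /usr/share/ca-certificates and then linked at /etc/ssl/certs/<subject-hash>.0, where the hash (b5213941, 51391683, 3ec20f2e) is exactly what "openssl x509 -hash -noout" prints for that certificate. The Go sketch below mirrors those two shell steps for illustration only; installCACert is a hypothetical helper name, not minikube's actual code.

    // installCACert mirrors the two shell steps shown in the log above:
    // copy the PEM into /usr/share/ca-certificates and link it under
    // /etc/ssl/certs/<openssl-subject-hash>.0 so OpenSSL-based clients trust it.
    // Illustrative sketch, not minikube's implementation.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func installCACert(pemPath string) error {
    	dst := filepath.Join("/usr/share/ca-certificates", filepath.Base(pemPath))
    	data, err := os.ReadFile(pemPath)
    	if err != nil {
    		return err
    	}
    	if err := os.WriteFile(dst, data, 0o644); err != nil {
    		return err
    	}
    	// "openssl x509 -hash -noout -in <cert>" prints the subject hash, e.g. b5213941.
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", dst).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	_ = os.Remove(link) // equivalent of "ln -fs": replace any stale link
    	return os.Symlink(dst, link)
    }

    func main() {
    	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }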
	I0917 00:13:44.659935  838391 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 00:13:44.663514  838391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 00:13:44.669906  838391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 00:13:44.676511  838391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 00:13:44.682889  838391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 00:13:44.689353  838391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 00:13:44.695631  838391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
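Each "openssl x509 -checkend 86400" run above asks whether the given control-plane certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit would flag the cert for regeneration. Below is a minimal Go equivalent using crypto/x509; certValidFor is a hypothetical name, and the apiserver-kubelet-client.crt path is simply the first certificate checked above.

    // certValidFor reports whether the certificate at path is still valid for at
    // least the given duration, the same check "openssl x509 -checkend 86400"
    // performs in the log above. Illustrative sketch only.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"errors"
    	"fmt"
    	"os"
    	"time"
    )

    func certValidFor(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, errors.New("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
    	ok, err := certValidFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	fmt.Println(ok, err)
    }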
	I0917 00:13:44.702340  838391 kubeadm.go:926] updating node {m03 192.168.49.4 8443 v1.34.0 containerd true true} ...
	I0917 00:13:44.702470  838391 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-472903-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 00:13:44.702498  838391 kube-vip.go:115] generating kube-vip config ...
	I0917 00:13:44.702533  838391 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0917 00:13:44.715980  838391 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:13:44.716039  838391 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
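Because "lsmod | grep ip_vs" exited with status 1, no ipvs kernel modules are loaded, so the kube-vip static pod generated above provides only ARP-based leader election for the VIP 192.168.49.254 rather than IPVS control-plane load balancing. A minimal sketch of that probe follows, assuming a hypothetical ipvsAvailable helper rather than minikube's real kube-vip.go logic.

    // ipvsAvailable mirrors the probe in the log: it reports true only when an
    // ip_vs kernel module shows up in lsmod output. Sketch under that assumption.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func ipvsAvailable() bool {
    	out, err := exec.Command("sh", "-c", "lsmod").Output()
    	if err != nil {
    		return false
    	}
    	return strings.Contains(string(out), "ip_vs")
    }

    func main() {
    	if ipvsAvailable() {
    		fmt.Println("enable kube-vip control-plane load balancing (IPVS)")
    	} else {
    		fmt.Println("fall back to ARP leader election only, as in the log above")
    	}
    }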
	I0917 00:13:44.716091  838391 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 00:13:44.725480  838391 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 00:13:44.725529  838391 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0917 00:13:44.734323  838391 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0917 00:13:44.753458  838391 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 00:13:44.773199  838391 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0917 00:13:44.791551  838391 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0917 00:13:44.795163  838391 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
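The pipeline above is an idempotent rewrite of /etc/hosts: it drops any existing line ending in a tab plus control-plane.minikube.internal and appends the current VIP mapping, so repeated starts never accumulate stale entries. A rough Go equivalent of the same rewrite is sketched below; ensureHostsEntry is a hypothetical helper, and the real code runs the shell pipeline over SSH instead.

    // ensureHostsEntry reproduces the shell pipeline from the log: drop any
    // existing "control-plane.minikube.internal" line from /etc/hosts and append
    // the desired IP mapping. Illustrative only.
    package main

    import (
    	"os"
    	"strings"
    )

    func ensureHostsEntry(path, ip, host string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if strings.HasSuffix(line, "\t"+host) {
    			continue // old mapping, re-added below
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, ip+"\t"+host)
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
    	_ = ensureHostsEntry("/etc/hosts", "192.168.49.254", "control-plane.minikube.internal")
    }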
	I0917 00:13:44.806641  838391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:13:44.919558  838391 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:13:44.932561  838391 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0917 00:13:44.932786  838391 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:13:44.934564  838391 out.go:179] * Verifying Kubernetes components...
	I0917 00:13:44.935745  838391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:13:45.049795  838391 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:13:45.064166  838391 kapi.go:59] client config for ha-472903: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key", CAFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0917 00:13:45.064235  838391 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0917 00:13:45.064458  838391 node_ready.go:35] waiting up to 6m0s for node "ha-472903-m03" to be "Ready" ...
	I0917 00:13:45.067494  838391 node_ready.go:49] node "ha-472903-m03" is "Ready"
	I0917 00:13:45.067523  838391 node_ready.go:38] duration metric: took 3.046711ms for node "ha-472903-m03" to be "Ready" ...
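The node_ready.go wait above simply polls the node object until its Ready condition reports True. A client-go sketch of that check is shown below; it assumes an ordinary kubeconfig file on disk, whereas the log's client is built from the profile's client certificate, and nodeIsReady is a hypothetical helper name.

    // nodeIsReady reports whether a node has the Ready condition set to True,
    // the check behind the node_ready.go lines above. Sketch only.
    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func nodeIsReady(client kubernetes.Interface, name string) (bool, error) {
    	node, err := client.CoreV1().Nodes().Get(context.Background(), name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, cond := range node.Status.Conditions {
    		if cond.Type == corev1.NodeReady {
    			return cond.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	ready, err := nodeIsReady(kubernetes.NewForConfigOrDie(config), "ha-472903-m03")
    	fmt.Println(ready, err)
    }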
	I0917 00:13:45.067540  838391 api_server.go:52] waiting for apiserver process to appear ...
	I0917 00:13:45.067600  838391 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:13:45.078867  838391 api_server.go:72] duration metric: took 146.25055ms to wait for apiserver process to appear ...
	I0917 00:13:45.078891  838391 api_server.go:88] waiting for apiserver healthz status ...
	I0917 00:13:45.078908  838391 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0917 00:13:45.084241  838391 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0917 00:13:45.085084  838391 api_server.go:141] control plane version: v1.34.0
	I0917 00:13:45.085104  838391 api_server.go:131] duration metric: took 6.207355ms to wait for apiserver health ...
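The healthz wait above issues a GET against https://192.168.49.2:8443/healthz and treats the API server as healthy once it answers 200 with body "ok". The sketch below reproduces that probe; it skips TLS verification purely for brevity, which is an assumption for illustration, since the real client trusts the cluster CA shown earlier in the log.

    // apiserverHealthy probes the /healthz endpoint the way the log does and
    // reports whether it returned HTTP 200. Sketch only: it skips TLS
    // verification, whereas minikube authenticates with the cluster CA.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func apiserverHealthy(url string) bool {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get(url)
    	if err != nil {
    		return false
    	}
    	defer resp.Body.Close()
    	return resp.StatusCode == http.StatusOK
    }

    func main() {
    	fmt.Println(apiserverHealthy("https://192.168.49.2:8443/healthz"))
    }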
	I0917 00:13:45.085112  838391 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 00:13:45.090968  838391 system_pods.go:59] 24 kube-system pods found
	I0917 00:13:45.091001  838391 system_pods.go:61] "coredns-66bc5c9577-c94hz" [774f1c0f-9759-44c2-957d-5a97670f951b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:13:45.091023  838391 system_pods.go:61] "coredns-66bc5c9577-qn8m7" [1c58205e-e865-42fc-8282-23e3d779ee97] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:13:45.091035  838391 system_pods.go:61] "etcd-ha-472903" [e333577b-838c-41c5-ba86-ce3d7de57077] Running
	I0917 00:13:45.091045  838391 system_pods.go:61] "etcd-ha-472903-m02" [8a478117-c53d-4621-aa09-be3c16d386c0] Running
	I0917 00:13:45.091053  838391 system_pods.go:61] "etcd-ha-472903-m03" [73e10c6a-306a-4c7e-b816-1b8d6b815292] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 00:13:45.091060  838391 system_pods.go:61] "kindnet-lh7dv" [1da43ca7-9af7-4573-9cdc-fd21b098ca2c] Running
	I0917 00:13:45.091064  838391 system_pods.go:61] "kindnet-q7c7s" [85db5b30-8ace-4bb0-8886-32b9ca032b2b] Running
	I0917 00:13:45.091070  838391 system_pods.go:61] "kindnet-x6twd" [f2346479-1adb-4bc7-af07-971525be2b05] Running
	I0917 00:13:45.091076  838391 system_pods.go:61] "kube-apiserver-ha-472903" [e2844751-3962-4753-8b63-79c124dd5fd7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:13:45.091088  838391 system_pods.go:61] "kube-apiserver-ha-472903-m02" [6675419c-7693-4970-b73c-8415bcda1684] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:13:45.091100  838391 system_pods.go:61] "kube-apiserver-ha-472903-m03" [8c79e747-e193-4471-a4be-ab4d604998ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:13:45.091109  838391 system_pods.go:61] "kube-controller-manager-ha-472903" [be5cfd0b-a3b9-44cf-8cde-74e9eb89c738] Running
	I0917 00:13:45.091115  838391 system_pods.go:61] "kube-controller-manager-ha-472903-m02" [54f6e7e0-0a78-4651-b24f-f902c6bf7efb] Running
	I0917 00:13:45.091127  838391 system_pods.go:61] "kube-controller-manager-ha-472903-m03" [d48b1c84-6653-43e8-9322-ae2c64471dde] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 00:13:45.091135  838391 system_pods.go:61] "kube-proxy-58lkb" [32fed88c-ce9e-4536-8e96-04ab5b4f5d42] Running
	I0917 00:13:45.091141  838391 system_pods.go:61] "kube-proxy-d4m8f" [d4a70eec-48a7-4ea6-871a-1b5ed2beca9a] Running
	I0917 00:13:45.091152  838391 system_pods.go:61] "kube-proxy-kn6nb" [53644856-9fda-4556-bbb5-12254c4b00a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 00:13:45.091159  838391 system_pods.go:61] "kube-scheduler-ha-472903" [e949de65-b218-45cb-abe7-79b704aae473] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 00:13:45.091164  838391 system_pods.go:61] "kube-scheduler-ha-472903-m02" [08b5a4f0-3aa6-4a82-b171-afc1eafcd4c2] Running
	I0917 00:13:45.091177  838391 system_pods.go:61] "kube-scheduler-ha-472903-m03" [7b954c46-3b8c-47e7-b10c-a20dd936d45c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 00:13:45.091187  838391 system_pods.go:61] "kube-vip-ha-472903" [d3849ebb-d365-491f-955c-2a7ca580290b] Running
	I0917 00:13:45.091196  838391 system_pods.go:61] "kube-vip-ha-472903-m02" [748f096f-bec6-4de8-92f0-128db827bdd6] Running
	I0917 00:13:45.091200  838391 system_pods.go:61] "kube-vip-ha-472903-m03" [62b1f237-95c3-4a40-b3b7-a519f7c80ad4] Running
	I0917 00:13:45.091208  838391 system_pods.go:61] "storage-provisioner" [ac7f283e-4d28-46cf-a519-bd227237d5e7] Running
	I0917 00:13:45.091216  838391 system_pods.go:74] duration metric: took 6.096009ms to wait for pod list to return data ...
	I0917 00:13:45.091227  838391 default_sa.go:34] waiting for default service account to be created ...
	I0917 00:13:45.093796  838391 default_sa.go:45] found service account: "default"
	I0917 00:13:45.093813  838391 default_sa.go:55] duration metric: took 2.577656ms for default service account to be created ...
	I0917 00:13:45.093820  838391 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 00:13:45.099455  838391 system_pods.go:86] 24 kube-system pods found
	I0917 00:13:45.099490  838391 system_pods.go:89] "coredns-66bc5c9577-c94hz" [774f1c0f-9759-44c2-957d-5a97670f951b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:13:45.099501  838391 system_pods.go:89] "coredns-66bc5c9577-qn8m7" [1c58205e-e865-42fc-8282-23e3d779ee97] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:13:45.099507  838391 system_pods.go:89] "etcd-ha-472903" [e333577b-838c-41c5-ba86-ce3d7de57077] Running
	I0917 00:13:45.099511  838391 system_pods.go:89] "etcd-ha-472903-m02" [8a478117-c53d-4621-aa09-be3c16d386c0] Running
	I0917 00:13:45.099518  838391 system_pods.go:89] "etcd-ha-472903-m03" [73e10c6a-306a-4c7e-b816-1b8d6b815292] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 00:13:45.099540  838391 system_pods.go:89] "kindnet-lh7dv" [1da43ca7-9af7-4573-9cdc-fd21b098ca2c] Running
	I0917 00:13:45.099551  838391 system_pods.go:89] "kindnet-q7c7s" [85db5b30-8ace-4bb0-8886-32b9ca032b2b] Running
	I0917 00:13:45.099556  838391 system_pods.go:89] "kindnet-x6twd" [f2346479-1adb-4bc7-af07-971525be2b05] Running
	I0917 00:13:45.099563  838391 system_pods.go:89] "kube-apiserver-ha-472903" [e2844751-3962-4753-8b63-79c124dd5fd7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:13:45.099578  838391 system_pods.go:89] "kube-apiserver-ha-472903-m02" [6675419c-7693-4970-b73c-8415bcda1684] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:13:45.099589  838391 system_pods.go:89] "kube-apiserver-ha-472903-m03" [8c79e747-e193-4471-a4be-ab4d604998ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:13:45.099596  838391 system_pods.go:89] "kube-controller-manager-ha-472903" [be5cfd0b-a3b9-44cf-8cde-74e9eb89c738] Running
	I0917 00:13:45.099601  838391 system_pods.go:89] "kube-controller-manager-ha-472903-m02" [54f6e7e0-0a78-4651-b24f-f902c6bf7efb] Running
	I0917 00:13:45.099614  838391 system_pods.go:89] "kube-controller-manager-ha-472903-m03" [d48b1c84-6653-43e8-9322-ae2c64471dde] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 00:13:45.099624  838391 system_pods.go:89] "kube-proxy-58lkb" [32fed88c-ce9e-4536-8e96-04ab5b4f5d42] Running
	I0917 00:13:45.099632  838391 system_pods.go:89] "kube-proxy-d4m8f" [d4a70eec-48a7-4ea6-871a-1b5ed2beca9a] Running
	I0917 00:13:45.099639  838391 system_pods.go:89] "kube-proxy-kn6nb" [53644856-9fda-4556-bbb5-12254c4b00a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 00:13:45.099649  838391 system_pods.go:89] "kube-scheduler-ha-472903" [e949de65-b218-45cb-abe7-79b704aae473] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 00:13:45.099657  838391 system_pods.go:89] "kube-scheduler-ha-472903-m02" [08b5a4f0-3aa6-4a82-b171-afc1eafcd4c2] Running
	I0917 00:13:45.099665  838391 system_pods.go:89] "kube-scheduler-ha-472903-m03" [7b954c46-3b8c-47e7-b10c-a20dd936d45c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 00:13:45.099678  838391 system_pods.go:89] "kube-vip-ha-472903" [d3849ebb-d365-491f-955c-2a7ca580290b] Running
	I0917 00:13:45.099682  838391 system_pods.go:89] "kube-vip-ha-472903-m02" [748f096f-bec6-4de8-92f0-128db827bdd6] Running
	I0917 00:13:45.099688  838391 system_pods.go:89] "kube-vip-ha-472903-m03" [62b1f237-95c3-4a40-b3b7-a519f7c80ad4] Running
	I0917 00:13:45.099693  838391 system_pods.go:89] "storage-provisioner" [ac7f283e-4d28-46cf-a519-bd227237d5e7] Running
	I0917 00:13:45.099701  838391 system_pods.go:126] duration metric: took 5.874708ms to wait for k8s-apps to be running ...
	I0917 00:13:45.099714  838391 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 00:13:45.099765  838391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:13:45.111785  838391 system_svc.go:56] duration metric: took 12.061761ms WaitForService to wait for kubelet
	I0917 00:13:45.111811  838391 kubeadm.go:578] duration metric: took 179.201567ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:13:45.111829  838391 node_conditions.go:102] verifying NodePressure condition ...
	I0917 00:13:45.115075  838391 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:13:45.115095  838391 node_conditions.go:123] node cpu capacity is 8
	I0917 00:13:45.115109  838391 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:13:45.115114  838391 node_conditions.go:123] node cpu capacity is 8
	I0917 00:13:45.115118  838391 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:13:45.115124  838391 node_conditions.go:123] node cpu capacity is 8
	I0917 00:13:45.115130  838391 node_conditions.go:105] duration metric: took 3.295987ms to run NodePressure ...
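The NodePressure step reads each node's reported capacity (here 8 CPUs and 304681132Ki of ephemeral storage on all three control-plane nodes) to confirm none is under resource pressure. Below is a client-go sketch that prints the same capacity figures, assuming a standard kubeconfig rather than minikube's in-process client config.

    // listNodeCapacity prints each node's CPU and ephemeral-storage capacity,
    // the figures the NodePressure check reports above. Sketch only.
    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(config)
    	nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		cpu := n.Status.Capacity["cpu"]
    		storage := n.Status.Capacity["ephemeral-storage"]
    		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
    	}
    }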
	I0917 00:13:45.115145  838391 start.go:241] waiting for startup goroutines ...
	I0917 00:13:45.115177  838391 start.go:255] writing updated cluster config ...
	I0917 00:13:45.116870  838391 out.go:203] 
	I0917 00:13:45.117967  838391 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:13:45.118090  838391 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0917 00:13:45.119494  838391 out.go:179] * Starting "ha-472903-m04" worker node in "ha-472903" cluster
	I0917 00:13:45.120460  838391 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0917 00:13:45.121518  838391 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:13:45.122495  838391 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0917 00:13:45.122511  838391 cache.go:58] Caching tarball of preloaded images
	I0917 00:13:45.122563  838391 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:13:45.122595  838391 preload.go:172] Found /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 00:13:45.122603  838391 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0917 00:13:45.122694  838391 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0917 00:13:45.143478  838391 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:13:45.143500  838391 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:13:45.143517  838391 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:13:45.143550  838391 start.go:360] acquireMachinesLock for ha-472903-m04: {Name:mkdbbd0d5b3cd7ad4b13d37f2d562d6d6421c5de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:13:45.143618  838391 start.go:364] duration metric: took 45.935µs to acquireMachinesLock for "ha-472903-m04"
	I0917 00:13:45.143643  838391 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:13:45.143650  838391 fix.go:54] fixHost starting: m04
	I0917 00:13:45.143945  838391 cli_runner.go:164] Run: docker container inspect ha-472903-m04 --format={{.State.Status}}
	I0917 00:13:45.161874  838391 fix.go:112] recreateIfNeeded on ha-472903-m04: state=Stopped err=<nil>
	W0917 00:13:45.161907  838391 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:13:45.163684  838391 out.go:252] * Restarting existing docker container for "ha-472903-m04" ...
	I0917 00:13:45.163768  838391 cli_runner.go:164] Run: docker start ha-472903-m04
	I0917 00:13:45.414854  838391 cli_runner.go:164] Run: docker container inspect ha-472903-m04 --format={{.State.Status}}
	I0917 00:13:45.433545  838391 kic.go:430] container "ha-472903-m04" state is running.
	I0917 00:13:45.433944  838391 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m04
	I0917 00:13:45.452344  838391 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0917 00:13:45.452626  838391 machine.go:93] provisionDockerMachine start ...
	I0917 00:13:45.452705  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	I0917 00:13:45.471203  838391 main.go:141] libmachine: Using SSH client type: native
	I0917 00:13:45.471486  838391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33589 <nil> <nil>}
	I0917 00:13:45.471509  838391 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:13:45.472182  838391 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55516->127.0.0.1:33589: read: connection reset by peer
	I0917 00:13:48.473360  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:13:51.474441  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:13:54.475694  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:13:57.476729  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:00.477687  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:03.477978  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:06.479736  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:09.480885  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:12.482720  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:15.483800  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:18.484741  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:21.485809  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:24.487156  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:27.488676  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:30.489805  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:33.490276  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:36.491714  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:39.492658  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:42.493967  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:45.494632  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:48.495764  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:51.496767  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:54.497734  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:57.499659  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:00.500675  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:03.501862  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:06.503834  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:09.505079  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:12.507641  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:15.508761  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:18.509736  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:21.510672  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:24.512280  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:27.514552  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:30.515709  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:33.516144  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:36.518405  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:39.519733  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:42.521625  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:45.522451  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:48.523249  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:51.524945  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:54.525931  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:57.527643  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:16:00.528649  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:16:03.529267  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:16:06.531578  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:16:09.532530  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:16:12.534632  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:16:15.537051  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:16:18.537304  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:16:21.538664  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:16:24.539680  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:16:27.541681  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:16:30.542852  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:16:33.543744  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:16:36.544245  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:16:39.544518  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:16:42.546746  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:16:45.548509  838391 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:16:45.548571  838391 ubuntu.go:182] provisioning hostname "ha-472903-m04"
	I0917 00:16:45.548664  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:16:45.567482  838391 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	I0917 00:16:45.567574  838391 machine.go:96] duration metric: took 3m0.114930329s to provisionDockerMachine
	I0917 00:16:45.567666  838391 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:16:45.567704  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:16:45.586204  838391 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	I0917 00:16:45.586381  838391 retry.go:31] will retry after 243.120334ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:16:45.829742  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:16:45.848018  838391 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	I0917 00:16:45.848165  838391 retry.go:31] will retry after 204.404017ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:16:46.053620  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:16:46.071508  838391 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	I0917 00:16:46.071648  838391 retry.go:31] will retry after 637.92377ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:16:46.710530  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:16:46.728463  838391 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	W0917 00:16:46.728598  838391 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:16:46.728620  838391 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:16:46.728676  838391 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:16:46.728722  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:16:46.746202  838391 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	I0917 00:16:46.746328  838391 retry.go:31] will retry after 328.494131ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:16:47.075622  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:16:47.094084  838391 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	I0917 00:16:47.094205  838391 retry.go:31] will retry after 397.703456ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:16:47.492843  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:16:47.511608  838391 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	I0917 00:16:47.511709  838391 retry.go:31] will retry after 759.296258ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:16:48.271608  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:16:48.289666  838391 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	W0917 00:16:48.289812  838391 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:16:48.289830  838391 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:16:48.289844  838391 fix.go:56] duration metric: took 3m3.146193546s for fixHost
	I0917 00:16:48.289858  838391 start.go:83] releasing machines lock for "ha-472903-m04", held for 3m3.146226948s
	W0917 00:16:48.289881  838391 start.go:714] error starting host: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:16:48.289975  838391 out.go:285] ! StartHost failed, but will try again: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	! StartHost failed, but will try again: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:16:48.289987  838391 start.go:729] Will try again in 5 seconds ...
	I0917 00:16:53.290141  838391 start.go:360] acquireMachinesLock for ha-472903-m04: {Name:mkdbbd0d5b3cd7ad4b13d37f2d562d6d6421c5de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:16:53.290272  838391 start.go:364] duration metric: took 94.983µs to acquireMachinesLock for "ha-472903-m04"
	I0917 00:16:53.290297  838391 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:16:53.290303  838391 fix.go:54] fixHost starting: m04
	I0917 00:16:53.290646  838391 cli_runner.go:164] Run: docker container inspect ha-472903-m04 --format={{.State.Status}}
	I0917 00:16:53.309611  838391 fix.go:112] recreateIfNeeded on ha-472903-m04: state=Stopped err=<nil>
	W0917 00:16:53.309640  838391 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:16:53.311233  838391 out.go:252] * Restarting existing docker container for "ha-472903-m04" ...
	I0917 00:16:53.311300  838391 cli_runner.go:164] Run: docker start ha-472903-m04
	I0917 00:16:53.541222  838391 cli_runner.go:164] Run: docker container inspect ha-472903-m04 --format={{.State.Status}}
	I0917 00:16:53.560095  838391 kic.go:430] container "ha-472903-m04" state is running.
	I0917 00:16:53.560573  838391 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m04
	I0917 00:16:53.580208  838391 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0917 00:16:53.580538  838391 machine.go:93] provisionDockerMachine start ...
	I0917 00:16:53.580642  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	I0917 00:16:53.599573  838391 main.go:141] libmachine: Using SSH client type: native
	I0917 00:16:53.599853  838391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33594 <nil> <nil>}
	I0917 00:16:53.599867  838391 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:16:53.600481  838391 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36824->127.0.0.1:33594: read: connection reset by peer
	I0917 00:16:56.602700  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:16:59.603638  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:02.605644  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:05.607721  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:08.608037  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:11.609632  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:14.610658  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:17.612855  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:20.613697  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:23.614397  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:26.616706  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:29.617175  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:32.618651  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:35.620635  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:38.621502  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:41.622948  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:44.624290  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:47.624933  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:50.625690  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:53.626092  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:56.628195  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:59.629019  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:02.631303  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:05.632822  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:08.633316  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:11.635679  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:14.636798  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:17.638657  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:20.639654  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:23.640721  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:26.642651  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:29.643601  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:32.645639  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:35.647624  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:38.648379  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:41.650676  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:44.651634  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:47.653582  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:50.654648  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:53.655970  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:56.658210  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:59.658941  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:19:02.661113  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:19:05.663405  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:19:08.664478  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:19:11.666153  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:19:14.667567  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:19:17.668447  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:19:20.668923  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:19:23.669615  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:19:26.671877  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:19:29.673145  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:19:32.674637  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:19:35.677064  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:19:38.678152  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:19:41.680118  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:19:44.681450  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:19:47.682442  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:19:50.682884  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:19:53.683789  838391 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:19:53.683836  838391 ubuntu.go:182] provisioning hostname "ha-472903-m04"
	I0917 00:19:53.683924  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:19:53.702821  838391 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	I0917 00:19:53.702901  838391 machine.go:96] duration metric: took 3m0.122343923s to provisionDockerMachine
	I0917 00:19:53.702985  838391 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:19:53.703018  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:19:53.720196  838391 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	I0917 00:19:53.720349  838391 retry.go:31] will retry after 273.264226ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:19:53.994608  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:19:54.012758  838391 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	I0917 00:19:54.012877  838391 retry.go:31] will retry after 451.557634ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:19:54.465611  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:19:54.483957  838391 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	I0917 00:19:54.484069  838391 retry.go:31] will retry after 372.513327ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:19:54.857680  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:19:54.875097  838391 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	W0917 00:19:54.875215  838391 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:19:54.875229  838391 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:19:54.875274  838391 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:19:54.875305  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:19:54.892677  838391 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	I0917 00:19:54.892775  838391 retry.go:31] will retry after 244.26035ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:19:55.137223  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:19:55.156010  838391 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	I0917 00:19:55.156141  838391 retry.go:31] will retry after 195.694179ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:19:55.352609  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:19:55.370515  838391 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	I0917 00:19:55.370623  838391 retry.go:31] will retry after 349.362306ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:19:55.720142  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:19:55.737839  838391 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	I0917 00:19:55.737968  838391 retry.go:31] will retry after 818.87418ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:19:56.557986  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:19:56.575881  838391 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	W0917 00:19:56.576024  838391 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:19:56.576041  838391 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:19:56.576050  838391 fix.go:56] duration metric: took 3m3.285747581s for fixHost
	I0917 00:19:56.576057  838391 start.go:83] releasing machines lock for "ha-472903-m04", held for 3m3.285773333s
	W0917 00:19:56.576146  838391 out.go:285] * Failed to start docker container. Running "minikube delete -p ha-472903" may fix it: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	* Failed to start docker container. Running "minikube delete -p ha-472903" may fix it: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:19:56.578148  838391 out.go:203] 
	W0917 00:19:56.579015  838391 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: Failed to start host: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	X Exiting due to GUEST_START: failed to start node: adding node: Failed to start host: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:19:56.579029  838391 out.go:285] * 
	* 
	W0917 00:19:56.580824  838391 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 00:19:56.581780  838391 out.go:203] 

                                                
                                                
** /stderr **
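The provisioning failure in the stderr above reduces to one lookup: minikube resolves each node's SSH endpoint by asking Docker for the host port bound to 22/tcp, using the `docker container inspect -f` template that repeats throughout the log. While ha-472903-m04 is not running that template cannot be evaluated, docker exits with code 1, and every retry ends in "unable to inspect a not running container to get SSH port". A minimal Go sketch of that lookup (an illustrative helper, not minikube's actual cli_runner code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// sshHostPort mirrors the inspect call seen in the log: it asks Docker for the
	// host port mapped to the container's 22/tcp. When the container is stopped the
	// mapping does not exist, docker exits non-zero, and the caller has no SSH
	// endpoint to provision over. (Illustrative only, not minikube code.)
	func sshHostPort(container string) (string, error) {
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		if err != nil {
			return "", fmt.Errorf("inspect %s: %w", container, err)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		if port, err := sshHostPort("ha-472903-m04"); err != nil {
			fmt.Println("lookup failed:", err)
		} else {
			fmt.Println("ssh host port:", port)
		}
	}

Run against a stopped container, this reproduces the exit code 1 reported by the cli_runner warnings above.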
ha_test.go:471: failed to run minikube start. args "out/minikube-linux-amd64 -p ha-472903 node list --alsologtostderr -v 5" : exit status 80
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 node list --alsologtostderr -v 5
ha_test.go:481: reported node list is not the same after restart. Before restart: ha-472903	192.168.49.2
ha-472903-m02	192.168.49.3
ha-472903-m03	192.168.49.4
ha-472903-m04	

                                                
                                                
After restart: ha-472903	192.168.49.2
ha-472903-m02	192.168.49.3
ha-472903-m03	192.168.49.4
ha-472903-m04	192.168.49.5
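The assertion that trips at ha_test.go:481 is a straight before/after comparison of the `minikube node list` output: before the restart m04 is listed with no IP, afterwards with 192.168.49.5, so the two lists differ. A minimal sketch of that kind of line-by-line check, using the values printed above (not the actual ha_test.go assertion):

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// Node lists as printed in the failure message above (tab-separated name/IP).
		before := "ha-472903\t192.168.49.2\nha-472903-m02\t192.168.49.3\nha-472903-m03\t192.168.49.4\nha-472903-m04\t"
		after := "ha-472903\t192.168.49.2\nha-472903-m02\t192.168.49.3\nha-472903-m03\t192.168.49.4\nha-472903-m04\t192.168.49.5"

		b, a := strings.Split(before, "\n"), strings.Split(after, "\n")
		for i, line := range b {
			other := ""
			if i < len(a) {
				other = a[i]
			}
			if line != other {
				fmt.Printf("node list changed after restart: %q -> %q\n", line, other)
			}
		}
	}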
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-472903
helpers_test.go:243: (dbg) docker inspect ha-472903:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "05f03528ecc5ba6a39041bcc2845d236679d61fa3752c15e7e068dac7d8c9047",
	        "Created": "2025-09-16T23:56:35.178831158Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 838588,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-17T00:13:23.170247962Z",
	            "FinishedAt": "2025-09-17T00:13:22.548619261Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/05f03528ecc5ba6a39041bcc2845d236679d61fa3752c15e7e068dac7d8c9047/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/05f03528ecc5ba6a39041bcc2845d236679d61fa3752c15e7e068dac7d8c9047/hostname",
	        "HostsPath": "/var/lib/docker/containers/05f03528ecc5ba6a39041bcc2845d236679d61fa3752c15e7e068dac7d8c9047/hosts",
	        "LogPath": "/var/lib/docker/containers/05f03528ecc5ba6a39041bcc2845d236679d61fa3752c15e7e068dac7d8c9047/05f03528ecc5ba6a39041bcc2845d236679d61fa3752c15e7e068dac7d8c9047-json.log",
	        "Name": "/ha-472903",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-472903:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-472903",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "05f03528ecc5ba6a39041bcc2845d236679d61fa3752c15e7e068dac7d8c9047",
	                "LowerDir": "/var/lib/docker/overlay2/37229b42d46c992f89d690b880f5a9c43e154eecc2ad5aeee133e9eb30accccb-init/diff:/var/lib/docker/overlay2/949a3fbecd0c2c005aa419b7ddc214e7bf4333225d7b227e8b0d0ea188b945ec/diff",
	                "MergedDir": "/var/lib/docker/overlay2/37229b42d46c992f89d690b880f5a9c43e154eecc2ad5aeee133e9eb30accccb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/37229b42d46c992f89d690b880f5a9c43e154eecc2ad5aeee133e9eb30accccb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/37229b42d46c992f89d690b880f5a9c43e154eecc2ad5aeee133e9eb30accccb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-472903",
	                "Source": "/var/lib/docker/volumes/ha-472903/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-472903",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-472903",
	                "name.minikube.sigs.k8s.io": "ha-472903",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f681bc3451c2f9b5cdb2156ffcba04f0e713f66cdf73bde32e7115dbf471fa7b",
	            "SandboxKey": "/var/run/docker/netns/f681bc3451c2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33574"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33575"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33578"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33576"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33577"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-472903": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:43:7c:dc:22:69",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "22d49b2f397dfabc2a3967bd54b05204a52976e683f65ff07bff00e793040bef",
	                    "EndpointID": "4140add73c3678ffb48555035c60424ac6e443ed664566963b98cd7acf01832d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-472903",
	                        "05f03528ecc5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
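Everything the port lookups and the post-mortem rely on is present in the inspect dump above: the published host ports live under NetworkSettings.Ports and the cluster address under NetworkSettings.Networks["ha-472903"]. A small sketch that decodes that JSON and pulls out the 22/tcp binding and the network IP, with struct fields named after the dump (an illustration, not minikube's own parsing):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// inspectEntry models just the fields of "docker inspect" output used here.
	type inspectEntry struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
			Networks map[string]struct {
				IPAddress string
			}
		}
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "ha-472903").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		var entries []inspectEntry
		if err := json.Unmarshal(out, &entries); err != nil || len(entries) == 0 {
			fmt.Println("decode failed:", err)
			return
		}
		e := entries[0]
		if b := e.NetworkSettings.Ports["22/tcp"]; len(b) > 0 {
			fmt.Printf("ssh endpoint: %s:%s\n", b[0].HostIp, b[0].HostPort)
		}
		fmt.Println("cluster IP:", e.NetworkSettings.Networks["ha-472903"].IPAddress)
	}

In this run that would print 127.0.0.1:33574 for SSH and 192.168.49.2 for the cluster network, matching the values used later in the Last Start log.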
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-472903 -n ha-472903
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p ha-472903 logs -n 25: (1.471445424s)
helpers_test.go:260: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-472903 cp ha-472903-m03:/home/docker/cp-test.txt ha-472903-m02:/home/docker/cp-test_ha-472903-m03_ha-472903-m02.txt               │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │ 17 Sep 25 00:11 UTC │
	│ ssh     │ ha-472903 ssh -n ha-472903-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │ 17 Sep 25 00:11 UTC │
	│ ssh     │ ha-472903 ssh -n ha-472903-m02 sudo cat /home/docker/cp-test_ha-472903-m03_ha-472903-m02.txt                                         │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │ 17 Sep 25 00:11 UTC │
	│ cp      │ ha-472903 cp ha-472903-m03:/home/docker/cp-test.txt ha-472903-m04:/home/docker/cp-test_ha-472903-m03_ha-472903-m04.txt               │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ ssh     │ ha-472903 ssh -n ha-472903-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │ 17 Sep 25 00:11 UTC │
	│ ssh     │ ha-472903 ssh -n ha-472903-m04 sudo cat /home/docker/cp-test_ha-472903-m03_ha-472903-m04.txt                                         │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ cp      │ ha-472903 cp testdata/cp-test.txt ha-472903-m04:/home/docker/cp-test.txt                                                             │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ ssh     │ ha-472903 ssh -n ha-472903-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ cp      │ ha-472903 cp ha-472903-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2970888916/001/cp-test_ha-472903-m04.txt │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ ssh     │ ha-472903 ssh -n ha-472903-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ cp      │ ha-472903 cp ha-472903-m04:/home/docker/cp-test.txt ha-472903:/home/docker/cp-test_ha-472903-m04_ha-472903.txt                       │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ ssh     │ ha-472903 ssh -n ha-472903-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ ssh     │ ha-472903 ssh -n ha-472903 sudo cat /home/docker/cp-test_ha-472903-m04_ha-472903.txt                                                 │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ cp      │ ha-472903 cp ha-472903-m04:/home/docker/cp-test.txt ha-472903-m02:/home/docker/cp-test_ha-472903-m04_ha-472903-m02.txt               │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ ssh     │ ha-472903 ssh -n ha-472903-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ ssh     │ ha-472903 ssh -n ha-472903-m02 sudo cat /home/docker/cp-test_ha-472903-m04_ha-472903-m02.txt                                         │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ cp      │ ha-472903 cp ha-472903-m04:/home/docker/cp-test.txt ha-472903-m03:/home/docker/cp-test_ha-472903-m04_ha-472903-m03.txt               │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ ssh     │ ha-472903 ssh -n ha-472903-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ ssh     │ ha-472903 ssh -n ha-472903-m03 sudo cat /home/docker/cp-test_ha-472903-m04_ha-472903-m03.txt                                         │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ node    │ ha-472903 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │ 17 Sep 25 00:11 UTC │
	│ node    │ ha-472903 node start m02 --alsologtostderr -v 5                                                                                      │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │ 17 Sep 25 00:11 UTC │
	│ node    │ ha-472903 node list --alsologtostderr -v 5                                                                                           │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │                     │
	│ stop    │ ha-472903 stop --alsologtostderr -v 5                                                                                                │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │ 17 Sep 25 00:13 UTC │
	│ start   │ ha-472903 start --wait true --alsologtostderr -v 5                                                                                   │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:13 UTC │                     │
	│ node    │ ha-472903 node list --alsologtostderr -v 5                                                                                           │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:19 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/17 00:13:22
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 00:13:22.953197  838391 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:13:22.953530  838391 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:13:22.953542  838391 out.go:374] Setting ErrFile to fd 2...
	I0917 00:13:22.953549  838391 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:13:22.953766  838391 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-749120/.minikube/bin
	I0917 00:13:22.954306  838391 out.go:368] Setting JSON to false
	I0917 00:13:22.955398  838391 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":10545,"bootTime":1758057458,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 00:13:22.955520  838391 start.go:140] virtualization: kvm guest
	I0917 00:13:22.957510  838391 out.go:179] * [ha-472903] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0917 00:13:22.958615  838391 out.go:179]   - MINIKUBE_LOCATION=21550
	I0917 00:13:22.958642  838391 notify.go:220] Checking for updates...
	I0917 00:13:22.960507  838391 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 00:13:22.961674  838391 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-749120/kubeconfig
	I0917 00:13:22.962866  838391 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-749120/.minikube
	I0917 00:13:22.964443  838391 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 00:13:22.965391  838391 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 00:13:22.966891  838391 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:13:22.966986  838391 driver.go:421] Setting default libvirt URI to qemu:///system
	I0917 00:13:22.992446  838391 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0917 00:13:22.992525  838391 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:13:23.045449  838391 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-17 00:13:23.034509691 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:13:23.045556  838391 docker.go:318] overlay module found
	I0917 00:13:23.047016  838391 out.go:179] * Using the docker driver based on existing profile
	I0917 00:13:23.047922  838391 start.go:304] selected driver: docker
	I0917 00:13:23.047937  838391 start.go:918] validating driver "docker" against &{Name:ha-472903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNam
es:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress
:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:13:23.048084  838391 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 00:13:23.048209  838391 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:13:23.101147  838391 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-17 00:13:23.091009521 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:13:23.102012  838391 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:13:23.102057  838391 cni.go:84] Creating CNI manager for ""
	I0917 00:13:23.102129  838391 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0917 00:13:23.102195  838391 start.go:348] cluster config:
	{Name:ha-472903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISoc
ket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-prov
isioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Custom
QemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:13:23.103903  838391 out.go:179] * Starting "ha-472903" primary control-plane node in "ha-472903" cluster
	I0917 00:13:23.104759  838391 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0917 00:13:23.105814  838391 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:13:23.106795  838391 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0917 00:13:23.106833  838391 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4
	I0917 00:13:23.106844  838391 cache.go:58] Caching tarball of preloaded images
	I0917 00:13:23.106881  838391 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:13:23.106921  838391 preload.go:172] Found /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 00:13:23.106932  838391 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0917 00:13:23.107045  838391 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0917 00:13:23.127051  838391 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:13:23.127078  838391 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:13:23.127093  838391 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:13:23.127117  838391 start.go:360] acquireMachinesLock for ha-472903: {Name:mk994658ce3314f2aed1dec341debc49d36a4326 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:13:23.127173  838391 start.go:364] duration metric: took 38.444µs to acquireMachinesLock for "ha-472903"
	I0917 00:13:23.127192  838391 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:13:23.127199  838391 fix.go:54] fixHost starting: 
	I0917 00:13:23.127403  838391 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0917 00:13:23.144605  838391 fix.go:112] recreateIfNeeded on ha-472903: state=Stopped err=<nil>
	W0917 00:13:23.144651  838391 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:13:23.146403  838391 out.go:252] * Restarting existing docker container for "ha-472903" ...
	I0917 00:13:23.146471  838391 cli_runner.go:164] Run: docker start ha-472903
	I0917 00:13:23.362855  838391 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0917 00:13:23.380820  838391 kic.go:430] container "ha-472903" state is running.
	I0917 00:13:23.381209  838391 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903
	I0917 00:13:23.398851  838391 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0917 00:13:23.399057  838391 machine.go:93] provisionDockerMachine start ...
	I0917 00:13:23.399113  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0917 00:13:23.416213  838391 main.go:141] libmachine: Using SSH client type: native
	I0917 00:13:23.416490  838391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33574 <nil> <nil>}
	I0917 00:13:23.416505  838391 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:13:23.417056  838391 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37384->127.0.0.1:33574: read: connection reset by peer
	I0917 00:13:26.554176  838391 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-472903
	
	I0917 00:13:26.554202  838391 ubuntu.go:182] provisioning hostname "ha-472903"
	I0917 00:13:26.554275  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0917 00:13:26.572576  838391 main.go:141] libmachine: Using SSH client type: native
	I0917 00:13:26.572800  838391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33574 <nil> <nil>}
	I0917 00:13:26.572813  838391 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-472903 && echo "ha-472903" | sudo tee /etc/hostname
	I0917 00:13:26.719562  838391 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-472903
	
	I0917 00:13:26.719659  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0917 00:13:26.737757  838391 main.go:141] libmachine: Using SSH client type: native
	I0917 00:13:26.738008  838391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33574 <nil> <nil>}
	I0917 00:13:26.738032  838391 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-472903' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-472903/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-472903' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 00:13:26.872954  838391 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:13:26.872993  838391 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-749120/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-749120/.minikube}
	I0917 00:13:26.873020  838391 ubuntu.go:190] setting up certificates
	I0917 00:13:26.873033  838391 provision.go:84] configureAuth start
	I0917 00:13:26.873086  838391 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903
	I0917 00:13:26.891066  838391 provision.go:143] copyHostCerts
	I0917 00:13:26.891111  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem
	I0917 00:13:26.891147  838391 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem, removing ...
	I0917 00:13:26.891169  838391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem
	I0917 00:13:26.891262  838391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem (1078 bytes)
	I0917 00:13:26.891384  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem
	I0917 00:13:26.891432  838391 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem, removing ...
	I0917 00:13:26.891443  838391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem
	I0917 00:13:26.891485  838391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem (1123 bytes)
	I0917 00:13:26.891575  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem
	I0917 00:13:26.891600  838391 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem, removing ...
	I0917 00:13:26.891610  838391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem
	I0917 00:13:26.891648  838391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem (1675 bytes)
	I0917 00:13:26.891725  838391 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem org=jenkins.ha-472903 san=[127.0.0.1 192.168.49.2 ha-472903 localhost minikube]
	I0917 00:13:27.127844  838391 provision.go:177] copyRemoteCerts
	I0917 00:13:27.127908  838391 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:13:27.127972  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0917 00:13:27.146507  838391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33574 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0917 00:13:27.243455  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 00:13:27.243525  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 00:13:27.269313  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 00:13:27.269382  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0917 00:13:27.294966  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 00:13:27.295048  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 00:13:27.320815  838391 provision.go:87] duration metric: took 447.761849ms to configureAuth
	I0917 00:13:27.320860  838391 ubuntu.go:206] setting minikube options for container-runtime
	I0917 00:13:27.321072  838391 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:13:27.321085  838391 machine.go:96] duration metric: took 3.922015218s to provisionDockerMachine
	I0917 00:13:27.321092  838391 start.go:293] postStartSetup for "ha-472903" (driver="docker")
	I0917 00:13:27.321102  838391 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 00:13:27.321150  838391 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 00:13:27.321188  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0917 00:13:27.339742  838391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33574 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0917 00:13:27.437715  838391 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 00:13:27.441468  838391 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 00:13:27.441498  838391 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 00:13:27.441506  838391 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 00:13:27.441513  838391 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 00:13:27.441524  838391 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-749120/.minikube/addons for local assets ...
	I0917 00:13:27.441576  838391 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-749120/.minikube/files for local assets ...
	I0917 00:13:27.441647  838391 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> 7527072.pem in /etc/ssl/certs
	I0917 00:13:27.441657  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> /etc/ssl/certs/7527072.pem
	I0917 00:13:27.441747  838391 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 00:13:27.451010  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem --> /etc/ssl/certs/7527072.pem (1708 bytes)
	I0917 00:13:27.477190  838391 start.go:296] duration metric: took 156.078591ms for postStartSetup
	I0917 00:13:27.477273  838391 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:13:27.477311  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0917 00:13:27.495838  838391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33574 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0917 00:13:27.588631  838391 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:13:27.593367  838391 fix.go:56] duration metric: took 4.46615876s for fixHost
	I0917 00:13:27.593398  838391 start.go:83] releasing machines lock for "ha-472903", held for 4.466212718s
	I0917 00:13:27.593488  838391 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903
	I0917 00:13:27.611894  838391 ssh_runner.go:195] Run: cat /version.json
	I0917 00:13:27.611963  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0917 00:13:27.611984  838391 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 00:13:27.612068  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0917 00:13:27.630790  838391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33574 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0917 00:13:27.632015  838391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33574 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0917 00:13:27.723564  838391 ssh_runner.go:195] Run: systemctl --version
	I0917 00:13:27.805571  838391 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 00:13:27.810704  838391 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0917 00:13:27.829982  838391 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0917 00:13:27.830056  838391 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:13:27.839307  838391 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
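The two find/sed commands above normalize any loopback CNI config (adding a "name" field and bumping cniVersion) and would move bridge/podman configs out of the way. A minimal sketch of what the patched loopback file is expected to contain afterwards; the filename here is an assumption, and only the added "name" field and the cniVersion value come from the commands in this log:

    # illustrative check on the node container; exact filename may differ
    docker exec ha-472903 cat /etc/cni/net.d/200-loopback.conf
    # {
    #   "cniVersion": "1.0.0",
    #   "name": "loopback",
    #   "type": "loopback"
    # }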
	I0917 00:13:27.839334  838391 start.go:495] detecting cgroup driver to use...
	I0917 00:13:27.839374  838391 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:13:27.839455  838391 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 00:13:27.853620  838391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 00:13:27.866086  838391 docker.go:218] disabling cri-docker service (if available) ...
	I0917 00:13:27.866143  838391 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 00:13:27.879568  838391 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 00:13:27.891699  838391 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 00:13:27.957039  838391 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 00:13:28.019649  838391 docker.go:234] disabling docker service ...
	I0917 00:13:28.019719  838391 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 00:13:28.032725  838391 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 00:13:28.045044  838391 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 00:13:28.110090  838391 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 00:13:28.176290  838391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 00:13:28.188485  838391 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:13:28.206191  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0917 00:13:28.216912  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 00:13:28.227586  838391 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0917 00:13:28.227653  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0917 00:13:28.238198  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:13:28.248607  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 00:13:28.258883  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:13:28.269300  838391 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 00:13:28.279692  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 00:13:28.290638  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 00:13:28.301524  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
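The sed edits above switch containerd to the systemd cgroup driver and pin the sandbox image in /etc/containerd/config.toml. A generic way to confirm the result on the node (not part of minikube's own flow shown here; the surrounding stanza name is the usual containerd 1.7 layout, assumed rather than copied from this log):

    # should report SystemdCgroup = true inside the runc.options stanza
    docker exec ha-472903 grep -n 'SystemdCgroup' /etc/containerd/config.toml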
	I0917 00:13:28.312695  838391 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 00:13:28.321821  838391 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 00:13:28.331494  838391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:13:28.395408  838391 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0917 00:13:28.510345  838391 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0917 00:13:28.510442  838391 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0917 00:13:28.514486  838391 start.go:563] Will wait 60s for crictl version
	I0917 00:13:28.514543  838391 ssh_runner.go:195] Run: which crictl
	I0917 00:13:28.518058  838391 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:13:28.553392  838391 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0917 00:13:28.553470  838391 ssh_runner.go:195] Run: containerd --version
	I0917 00:13:28.578186  838391 ssh_runner.go:195] Run: containerd --version
	I0917 00:13:28.607037  838391 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0917 00:13:28.608343  838391 cli_runner.go:164] Run: docker network inspect ha-472903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:13:28.625981  838391 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0917 00:13:28.630074  838391 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:13:28.642270  838391 kubeadm.go:875] updating cluster {Name:ha-472903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIP
s:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false
ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiza
tions:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 00:13:28.642447  838391 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0917 00:13:28.642500  838391 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 00:13:28.677502  838391 containerd.go:627] all images are preloaded for containerd runtime.
	I0917 00:13:28.677528  838391 containerd.go:534] Images already preloaded, skipping extraction
	I0917 00:13:28.677596  838391 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 00:13:28.711767  838391 containerd.go:627] all images are preloaded for containerd runtime.
	I0917 00:13:28.711790  838391 cache_images.go:85] Images are preloaded, skipping loading
	I0917 00:13:28.711799  838391 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 containerd true true} ...
	I0917 00:13:28.711898  838391 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-472903 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 00:13:28.711952  838391 ssh_runner.go:195] Run: sudo crictl info
	I0917 00:13:28.748238  838391 cni.go:84] Creating CNI manager for ""
	I0917 00:13:28.748269  838391 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0917 00:13:28.748282  838391 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 00:13:28.748301  838391 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-472903 NodeName:ha-472903 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 00:13:28.748434  838391 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "ha-472903"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
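The kubeadm config rendered above is later written to /var/tmp/minikube/kubeadm.yaml.new (see the scp step further down in this log). If a rendered config needs a manual sanity check, recent kubeadm releases ship a validate subcommand; the binary path below simply mirrors the kubelet path used in this run and is an assumption:

    # optional, by-hand validation of the generated config
    docker exec ha-472903 sudo /var/lib/minikube/binaries/v1.34.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new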
	
	I0917 00:13:28.748456  838391 kube-vip.go:115] generating kube-vip config ...
	I0917 00:13:28.748504  838391 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0917 00:13:28.761835  838391 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:13:28.761950  838391 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
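kube-vip runs the static Pod above on each control-plane node and, per its env vars, advertises the VIP 192.168.49.254 over ARP on eth0. A generic follow-up check (not taken from this log) is to look for the address on the node; only the current leader should hold it:

    # the VIP should be attached to eth0 on exactly one control-plane node
    docker exec ha-472903 ip addr show eth0 | grep 192.168.49.254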
	I0917 00:13:28.762005  838391 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 00:13:28.771377  838391 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 00:13:28.771466  838391 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0917 00:13:28.780815  838391 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0917 00:13:28.799673  838391 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 00:13:28.818695  838391 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I0917 00:13:28.837443  838391 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0917 00:13:28.856629  838391 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0917 00:13:28.860342  838391 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:13:28.871978  838391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:13:28.937920  838391 ssh_runner.go:195] Run: sudo systemctl start kubelet
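At this point the kubelet drop-in, kube-vip manifest, and hosts entry are in place and kubelet has been started. A generic check if the control plane does not come back afterwards (assuming systemd tooling inside the node container, which the kicbase image provides; minikube performs its own health checks later in this log):

    docker exec ha-472903 systemctl is-active kubelet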
	I0917 00:13:28.965162  838391 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903 for IP: 192.168.49.2
	I0917 00:13:28.965183  838391 certs.go:194] generating shared ca certs ...
	I0917 00:13:28.965200  838391 certs.go:226] acquiring lock for ca certs: {Name:mk87d179b4a631193bd9c86db8034ccf19400cde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:13:28.965352  838391 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key
	I0917 00:13:28.965429  838391 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key
	I0917 00:13:28.965446  838391 certs.go:256] generating profile certs ...
	I0917 00:13:28.965567  838391 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key
	I0917 00:13:28.965609  838391 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.8acf531c
	I0917 00:13:28.965631  838391 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.8acf531c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I0917 00:13:28.981661  838391 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.8acf531c ...
	I0917 00:13:28.981698  838391 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.8acf531c: {Name:mkdef0e1cbf73e7227a698510b51d68a698391c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:13:28.981868  838391 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.8acf531c ...
	I0917 00:13:28.981880  838391 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.8acf531c: {Name:mk80b61f5fe8d635199050a211c5a719c4b8f9fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:13:28.981959  838391 certs.go:381] copying /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.8acf531c -> /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt
	I0917 00:13:28.982123  838391 certs.go:385] copying /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.8acf531c -> /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key
	I0917 00:13:28.982267  838391 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key
	I0917 00:13:28.982283  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 00:13:28.982296  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 00:13:28.982309  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 00:13:28.982327  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 00:13:28.982340  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 00:13:28.982352  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 00:13:28.982367  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 00:13:28.982379  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 00:13:28.982446  838391 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem (1338 bytes)
	W0917 00:13:28.982481  838391 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707_empty.pem, impossibly tiny 0 bytes
	I0917 00:13:28.982491  838391 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 00:13:28.982517  838391 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem (1078 bytes)
	I0917 00:13:28.982539  838391 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem (1123 bytes)
	I0917 00:13:28.982559  838391 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem (1675 bytes)
	I0917 00:13:28.982598  838391 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem (1708 bytes)
	I0917 00:13:28.982624  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem -> /usr/share/ca-certificates/752707.pem
	I0917 00:13:28.982638  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> /usr/share/ca-certificates/7527072.pem
	I0917 00:13:28.982650  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:13:28.983259  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 00:13:29.011855  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 00:13:29.044116  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 00:13:29.076632  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0917 00:13:29.102081  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0917 00:13:29.127618  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 00:13:29.154054  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 00:13:29.181152  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 00:13:29.207152  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem --> /usr/share/ca-certificates/752707.pem (1338 bytes)
	I0917 00:13:29.234803  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem --> /usr/share/ca-certificates/7527072.pem (1708 bytes)
	I0917 00:13:29.261065  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 00:13:29.285817  838391 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 00:13:29.304802  838391 ssh_runner.go:195] Run: openssl version
	I0917 00:13:29.310548  838391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7527072.pem && ln -fs /usr/share/ca-certificates/7527072.pem /etc/ssl/certs/7527072.pem"
	I0917 00:13:29.321280  838391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7527072.pem
	I0917 00:13:29.325168  838391 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 23:54 /usr/share/ca-certificates/7527072.pem
	I0917 00:13:29.325220  838391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7527072.pem
	I0917 00:13:29.332550  838391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7527072.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 00:13:29.342450  838391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 00:13:29.352677  838391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:13:29.356484  838391 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:13:29.356557  838391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:13:29.363671  838391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 00:13:29.373502  838391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/752707.pem && ln -fs /usr/share/ca-certificates/752707.pem /etc/ssl/certs/752707.pem"
	I0917 00:13:29.383350  838391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/752707.pem
	I0917 00:13:29.386969  838391 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 23:54 /usr/share/ca-certificates/752707.pem
	I0917 00:13:29.387020  838391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/752707.pem
	I0917 00:13:29.393845  838391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/752707.pem /etc/ssl/certs/51391683.0"
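The three symlinks created above (3ec20f2e.0, b5213941.0, 51391683.0) follow the OpenSSL subject-hash naming convention for /etc/ssl/certs. The hash value can be reproduced with the same openssl invocation minikube runs, for example on the node:

    docker exec ha-472903 openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # b5213941  <- matches the /etc/ssl/certs/b5213941.0 symlink created above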
	I0917 00:13:29.402996  838391 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 00:13:29.406679  838391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 00:13:29.413276  838391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 00:13:29.420039  838391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 00:13:29.426813  838391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 00:13:29.433710  838391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 00:13:29.440812  838391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0917 00:13:29.447756  838391 kubeadm.go:392] StartCluster: {Name:ha-472903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[
] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ing
ress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizatio
ns:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:13:29.447896  838391 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0917 00:13:29.447983  838391 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 00:13:29.484343  838391 cri.go:89] found id: "5a5a17cca6c0a643b6c0881dab5508dcb7de8e6ad77d7e6ecb81d434ab2cc8a1"
	I0917 00:13:29.484364  838391 cri.go:89] found id: "8683544e2a9d579448e28b8f33653e2c8d1315b2d07bd7b4ce574428d93c6f3a"
	I0917 00:13:29.484368  838391 cri.go:89] found id: "9f103b05d2d6fd9df1ffca0135173363251e58587aa3f9093200d96a7302d315"
	I0917 00:13:29.484373  838391 cri.go:89] found id: "3b457407f10e357ce33da7fa3fb4333f8312f0d3e3570cf8528cdcac8f5a1d0f"
	I0917 00:13:29.484376  838391 cri.go:89] found id: "cc69d2451cb65860b5bc78e027be2fc1cb0f9fa6542b4abe3bc1ff1c90a8fe60"
	I0917 00:13:29.484379  838391 cri.go:89] found id: "92dd4d116eb0387dded82fb32d35690ec2d00e3f5e7ac81bf7aea0c6814edd5e"
	I0917 00:13:29.484382  838391 cri.go:89] found id: "bba28cace6502de93aa43db4fb51671581c5074990dea721d98d36d839734a67"
	I0917 00:13:29.484384  838391 cri.go:89] found id: "087290a41f59caa4f9bc89759bcec6cf90f47c8a2ab83b7c671a8fff35641df9"
	I0917 00:13:29.484387  838391 cri.go:89] found id: "0aba62132d764965d8e1a80a4a6345bb7e34892b23143da4a7af3450cd465d6c"
	I0917 00:13:29.484395  838391 cri.go:89] found id: "23c0af0bdbe9526d53769461ed9f80d8c743b02e625b65cce39c888f5e7d4b4e"
	I0917 00:13:29.484398  838391 cri.go:89] found id: ""
	I0917 00:13:29.484470  838391 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W0917 00:13:29.498073  838391 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T00:13:29Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I0917 00:13:29.498177  838391 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 00:13:29.508791  838391 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0917 00:13:29.508813  838391 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0917 00:13:29.508861  838391 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0917 00:13:29.519962  838391 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:13:29.520528  838391 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-472903" does not appear in /home/jenkins/minikube-integration/21550-749120/kubeconfig
	I0917 00:13:29.520700  838391 kubeconfig.go:62] /home/jenkins/minikube-integration/21550-749120/kubeconfig needs updating (will repair): [kubeconfig missing "ha-472903" cluster setting kubeconfig missing "ha-472903" context setting]
	I0917 00:13:29.521229  838391 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/kubeconfig: {Name:mk937123a8fee18625833b0bd778c4556f6787be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:13:29.521963  838391 kapi.go:59] client config for ha-472903: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key", CAFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0917 00:13:29.522552  838391 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0917 00:13:29.522579  838391 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0917 00:13:29.522586  838391 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0917 00:13:29.522592  838391 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0917 00:13:29.522598  838391 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0917 00:13:29.522631  838391 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0917 00:13:29.523130  838391 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0917 00:13:29.536212  838391 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.49.2
	I0917 00:13:29.536248  838391 kubeadm.go:593] duration metric: took 27.419363ms to restartPrimaryControlPlane
	I0917 00:13:29.536260  838391 kubeadm.go:394] duration metric: took 88.513961ms to StartCluster
	I0917 00:13:29.536281  838391 settings.go:142] acquiring lock: {Name:mk6c1a5bee23e141aad5180323c16c47ed580ab8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:13:29.536352  838391 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21550-749120/kubeconfig
	I0917 00:13:29.537180  838391 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/kubeconfig: {Name:mk937123a8fee18625833b0bd778c4556f6787be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:13:29.537465  838391 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0917 00:13:29.537498  838391 start.go:241] waiting for startup goroutines ...
	I0917 00:13:29.537509  838391 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 00:13:29.537779  838391 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:13:29.539896  838391 out.go:179] * Enabled addons: 
	I0917 00:13:29.541345  838391 addons.go:514] duration metric: took 3.828487ms for enable addons: enabled=[]
	I0917 00:13:29.541404  838391 start.go:246] waiting for cluster config update ...
	I0917 00:13:29.541459  838391 start.go:255] writing updated cluster config ...
	I0917 00:13:29.543184  838391 out.go:203] 
	I0917 00:13:29.548360  838391 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:13:29.548520  838391 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0917 00:13:29.550284  838391 out.go:179] * Starting "ha-472903-m02" control-plane node in "ha-472903" cluster
	I0917 00:13:29.551514  838391 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0917 00:13:29.552445  838391 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:13:29.554184  838391 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0917 00:13:29.554221  838391 cache.go:58] Caching tarball of preloaded images
	I0917 00:13:29.554326  838391 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:13:29.554361  838391 preload.go:172] Found /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 00:13:29.554376  838391 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0917 00:13:29.554541  838391 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0917 00:13:29.581238  838391 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:13:29.581265  838391 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:13:29.581286  838391 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:13:29.581322  838391 start.go:360] acquireMachinesLock for ha-472903-m02: {Name:mk81d8c73856cf84ceff1767a1681f3f3cdab773 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:13:29.581402  838391 start.go:364] duration metric: took 53.081µs to acquireMachinesLock for "ha-472903-m02"
	I0917 00:13:29.581447  838391 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:13:29.581461  838391 fix.go:54] fixHost starting: m02
	I0917 00:13:29.581795  838391 cli_runner.go:164] Run: docker container inspect ha-472903-m02 --format={{.State.Status}}
	I0917 00:13:29.604878  838391 fix.go:112] recreateIfNeeded on ha-472903-m02: state=Stopped err=<nil>
	W0917 00:13:29.604915  838391 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:13:29.607517  838391 out.go:252] * Restarting existing docker container for "ha-472903-m02" ...
	I0917 00:13:29.607600  838391 cli_runner.go:164] Run: docker start ha-472903-m02
	I0917 00:13:29.911119  838391 cli_runner.go:164] Run: docker container inspect ha-472903-m02 --format={{.State.Status}}
	I0917 00:13:29.930731  838391 kic.go:430] container "ha-472903-m02" state is running.
	I0917 00:13:29.931116  838391 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m02
	I0917 00:13:29.951026  838391 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0917 00:13:29.951305  838391 machine.go:93] provisionDockerMachine start ...
	I0917 00:13:29.951370  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0917 00:13:29.974010  838391 main.go:141] libmachine: Using SSH client type: native
	I0917 00:13:29.974330  838391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33579 <nil> <nil>}
	I0917 00:13:29.974348  838391 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:13:29.975092  838391 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37012->127.0.0.1:33579: read: connection reset by peer
	I0917 00:13:33.111351  838391 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-472903-m02
	
	I0917 00:13:33.111379  838391 ubuntu.go:182] provisioning hostname "ha-472903-m02"
	I0917 00:13:33.111466  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0917 00:13:33.129914  838391 main.go:141] libmachine: Using SSH client type: native
	I0917 00:13:33.130125  838391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33579 <nil> <nil>}
	I0917 00:13:33.130138  838391 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-472903-m02 && echo "ha-472903-m02" | sudo tee /etc/hostname
	I0917 00:13:33.276390  838391 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-472903-m02
	
	I0917 00:13:33.276473  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0917 00:13:33.295322  838391 main.go:141] libmachine: Using SSH client type: native
	I0917 00:13:33.295578  838391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33579 <nil> <nil>}
	I0917 00:13:33.295626  838391 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-472903-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-472903-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-472903-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 00:13:33.430221  838391 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:13:33.430255  838391 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-749120/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-749120/.minikube}
	I0917 00:13:33.430276  838391 ubuntu.go:190] setting up certificates
	I0917 00:13:33.430293  838391 provision.go:84] configureAuth start
	I0917 00:13:33.430347  838391 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m02
	I0917 00:13:33.447859  838391 provision.go:143] copyHostCerts
	I0917 00:13:33.447896  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem
	I0917 00:13:33.447924  838391 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem, removing ...
	I0917 00:13:33.447931  838391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem
	I0917 00:13:33.447997  838391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem (1078 bytes)
	I0917 00:13:33.448082  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem
	I0917 00:13:33.448101  838391 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem, removing ...
	I0917 00:13:33.448105  838391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem
	I0917 00:13:33.448129  838391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem (1123 bytes)
	I0917 00:13:33.448171  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem
	I0917 00:13:33.448188  838391 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem, removing ...
	I0917 00:13:33.448194  838391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem
	I0917 00:13:33.448221  838391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem (1675 bytes)
	I0917 00:13:33.448284  838391 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem org=jenkins.ha-472903-m02 san=[127.0.0.1 192.168.49.3 ha-472903-m02 localhost minikube]
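The machine server cert generated here for ha-472903-m02 embeds the SANs listed above (127.0.0.1, 192.168.49.3, ha-472903-m02, localhost, minikube). A generic way to inspect them on the host, using the ServerCertPath from the auth options earlier in this log (inspection command only, not part of minikube's flow):

    openssl x509 -noout -text -in /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem | grep -A1 'Subject Alternative Name'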
	I0917 00:13:33.772202  838391 provision.go:177] copyRemoteCerts
	I0917 00:13:33.772271  838391 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:13:33.772308  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0917 00:13:33.790580  838391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33579 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa Username:docker}
	I0917 00:13:33.888743  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 00:13:33.888811  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 00:13:33.915641  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 00:13:33.915714  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 00:13:33.947505  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 00:13:33.947576  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 00:13:33.982626  838391 provision.go:87] duration metric: took 552.315533ms to configureAuth
	I0917 00:13:33.982666  838391 ubuntu.go:206] setting minikube options for container-runtime
	I0917 00:13:33.983009  838391 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:13:33.983035  838391 machine.go:96] duration metric: took 4.031716501s to provisionDockerMachine
	I0917 00:13:33.983048  838391 start.go:293] postStartSetup for "ha-472903-m02" (driver="docker")
	I0917 00:13:33.983079  838391 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 00:13:33.983149  838391 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 00:13:33.983189  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0917 00:13:34.006390  838391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33579 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa Username:docker}
	I0917 00:13:34.114836  838391 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 00:13:34.122569  838391 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 00:13:34.122609  838391 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 00:13:34.122622  838391 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 00:13:34.122631  838391 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 00:13:34.122648  838391 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-749120/.minikube/addons for local assets ...
	I0917 00:13:34.122715  838391 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-749120/.minikube/files for local assets ...
	I0917 00:13:34.122819  838391 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> 7527072.pem in /etc/ssl/certs
	I0917 00:13:34.122842  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> /etc/ssl/certs/7527072.pem
	I0917 00:13:34.122963  838391 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 00:13:34.133119  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem --> /etc/ssl/certs/7527072.pem (1708 bytes)
	I0917 00:13:34.163792  838391 start.go:296] duration metric: took 180.726136ms for postStartSetup
	I0917 00:13:34.163881  838391 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:13:34.163931  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0917 00:13:34.187017  838391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33579 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa Username:docker}
	I0917 00:13:34.289000  838391 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:13:34.295122  838391 fix.go:56] duration metric: took 4.713651457s for fixHost
	I0917 00:13:34.295149  838391 start.go:83] releasing machines lock for "ha-472903-m02", held for 4.713713361s
	I0917 00:13:34.295238  838391 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m02
	I0917 00:13:34.323055  838391 out.go:179] * Found network options:
	I0917 00:13:34.324886  838391 out.go:179]   - NO_PROXY=192.168.49.2
	W0917 00:13:34.326740  838391 proxy.go:120] fail to check proxy env: Error ip not in block
	W0917 00:13:34.326797  838391 proxy.go:120] fail to check proxy env: Error ip not in block
	I0917 00:13:34.326881  838391 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 00:13:34.326949  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0917 00:13:34.327068  838391 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 00:13:34.327142  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0917 00:13:34.349495  838391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33579 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa Username:docker}
	I0917 00:13:34.351023  838391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33579 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa Username:docker}
	I0917 00:13:34.450454  838391 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0917 00:13:34.547618  838391 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0917 00:13:34.547706  838391 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:13:34.558822  838391 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0917 00:13:34.558854  838391 start.go:495] detecting cgroup driver to use...
	I0917 00:13:34.558889  838391 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:13:34.558939  838391 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 00:13:34.584135  838391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 00:13:34.599048  838391 docker.go:218] disabling cri-docker service (if available) ...
	I0917 00:13:34.599118  838391 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 00:13:34.615043  838391 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 00:13:34.627813  838391 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 00:13:34.751575  838391 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 00:13:34.913336  838391 docker.go:234] disabling docker service ...
	I0917 00:13:34.913429  838391 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 00:13:34.943843  838391 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 00:13:34.964995  838391 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 00:13:35.154858  838391 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 00:13:35.276803  838391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 00:13:35.292337  838391 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:13:35.312501  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0917 00:13:35.325061  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 00:13:35.337094  838391 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0917 00:13:35.337162  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0917 00:13:35.349635  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:13:35.361644  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 00:13:35.373144  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:13:35.385968  838391 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 00:13:35.397684  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 00:13:35.409662  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 00:13:35.422089  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 00:13:35.433950  838391 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 00:13:35.445355  838391 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 00:13:35.456096  838391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:13:35.554404  838391 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0917 00:13:35.775103  838391 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0917 00:13:35.775175  838391 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0917 00:13:35.780034  838391 start.go:563] Will wait 60s for crictl version
	I0917 00:13:35.780106  838391 ssh_runner.go:195] Run: which crictl
	I0917 00:13:35.784109  838391 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:13:35.826151  838391 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0917 00:13:35.826224  838391 ssh_runner.go:195] Run: containerd --version
	I0917 00:13:35.852960  838391 ssh_runner.go:195] Run: containerd --version
	I0917 00:13:35.877876  838391 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0917 00:13:35.879103  838391 out.go:179]   - env NO_PROXY=192.168.49.2
	I0917 00:13:35.880100  838391 cli_runner.go:164] Run: docker network inspect ha-472903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:13:35.897195  838391 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0917 00:13:35.901082  838391 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:13:35.912748  838391 mustload.go:65] Loading cluster: ha-472903
	I0917 00:13:35.912967  838391 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:13:35.913168  838391 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0917 00:13:35.931969  838391 host.go:66] Checking if "ha-472903" exists ...
	I0917 00:13:35.932217  838391 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903 for IP: 192.168.49.3
	I0917 00:13:35.932230  838391 certs.go:194] generating shared ca certs ...
	I0917 00:13:35.932244  838391 certs.go:226] acquiring lock for ca certs: {Name:mk87d179b4a631193bd9c86db8034ccf19400cde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:13:35.932358  838391 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key
	I0917 00:13:35.932394  838391 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key
	I0917 00:13:35.932404  838391 certs.go:256] generating profile certs ...
	I0917 00:13:35.932495  838391 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key
	I0917 00:13:35.932546  838391 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.b92722b6
	I0917 00:13:35.932585  838391 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key
	I0917 00:13:35.932596  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 00:13:35.932607  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 00:13:35.932619  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 00:13:35.932630  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 00:13:35.932643  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 00:13:35.932656  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 00:13:35.932668  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 00:13:35.932681  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 00:13:35.932726  838391 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem (1338 bytes)
	W0917 00:13:35.932752  838391 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707_empty.pem, impossibly tiny 0 bytes
	I0917 00:13:35.932761  838391 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 00:13:35.932781  838391 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem (1078 bytes)
	I0917 00:13:35.932801  838391 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem (1123 bytes)
	I0917 00:13:35.932822  838391 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem (1675 bytes)
	I0917 00:13:35.932861  838391 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem (1708 bytes)
	I0917 00:13:35.932888  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem -> /usr/share/ca-certificates/752707.pem
	I0917 00:13:35.932902  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> /usr/share/ca-certificates/7527072.pem
	I0917 00:13:35.932914  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:13:35.932957  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0917 00:13:35.950361  838391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33574 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0917 00:13:36.038689  838391 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0917 00:13:36.046320  838391 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0917 00:13:36.065517  838391 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0917 00:13:36.070746  838391 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0917 00:13:36.088267  838391 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0917 00:13:36.093060  838391 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0917 00:13:36.109798  838391 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0917 00:13:36.114630  838391 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0917 00:13:36.132250  838391 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0917 00:13:36.137979  838391 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0917 00:13:36.158118  838391 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0917 00:13:36.163359  838391 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0917 00:13:36.183892  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 00:13:36.221052  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 00:13:36.260302  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 00:13:36.294497  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0917 00:13:36.328388  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0917 00:13:36.364809  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 00:13:36.406406  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 00:13:36.458823  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 00:13:36.524795  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem --> /usr/share/ca-certificates/752707.pem (1338 bytes)
	I0917 00:13:36.572655  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem --> /usr/share/ca-certificates/7527072.pem (1708 bytes)
	I0917 00:13:36.619864  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 00:13:36.672387  838391 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0917 00:13:36.709674  838391 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0917 00:13:36.746751  838391 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0917 00:13:36.783161  838391 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0917 00:13:36.813099  838391 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0917 00:13:36.837070  838391 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0917 00:13:36.858764  838391 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0917 00:13:36.877818  838391 ssh_runner.go:195] Run: openssl version
	I0917 00:13:36.883443  838391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7527072.pem && ln -fs /usr/share/ca-certificates/7527072.pem /etc/ssl/certs/7527072.pem"
	I0917 00:13:36.894826  838391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7527072.pem
	I0917 00:13:36.899068  838391 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 23:54 /usr/share/ca-certificates/7527072.pem
	I0917 00:13:36.899146  838391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7527072.pem
	I0917 00:13:36.907246  838391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7527072.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 00:13:36.916910  838391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 00:13:36.927032  838391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:13:36.930914  838391 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:13:36.930968  838391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:13:36.940300  838391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 00:13:36.953573  838391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/752707.pem && ln -fs /usr/share/ca-certificates/752707.pem /etc/ssl/certs/752707.pem"
	I0917 00:13:36.967306  838391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/752707.pem
	I0917 00:13:36.971796  838391 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 23:54 /usr/share/ca-certificates/752707.pem
	I0917 00:13:36.971852  838391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/752707.pem
	I0917 00:13:36.981091  838391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/752707.pem /etc/ssl/certs/51391683.0"
	I0917 00:13:36.991490  838391 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 00:13:36.995167  838391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 00:13:37.003067  838391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 00:13:37.009863  838391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 00:13:37.016575  838391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 00:13:37.023485  838391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 00:13:37.032694  838391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0917 00:13:37.042763  838391 kubeadm.go:926] updating node {m02 192.168.49.3 8443 v1.34.0 containerd true true} ...
	I0917 00:13:37.042877  838391 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-472903-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 00:13:37.042911  838391 kube-vip.go:115] generating kube-vip config ...
	I0917 00:13:37.042948  838391 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0917 00:13:37.060530  838391 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:13:37.060601  838391 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0917 00:13:37.060658  838391 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 00:13:37.072293  838391 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 00:13:37.072371  838391 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0917 00:13:37.084220  838391 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0917 00:13:37.109777  838391 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 00:13:37.137135  838391 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0917 00:13:37.165385  838391 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0917 00:13:37.170106  838391 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:13:37.186447  838391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:13:37.337215  838391 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:13:37.351480  838391 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0917 00:13:37.351795  838391 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:13:37.353499  838391 out.go:179] * Verifying Kubernetes components...
	I0917 00:13:37.354663  838391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:13:37.476140  838391 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:13:37.492755  838391 kapi.go:59] client config for ha-472903: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key", CAFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0917 00:13:37.492840  838391 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0917 00:13:37.493129  838391 node_ready.go:35] waiting up to 6m0s for node "ha-472903-m02" to be "Ready" ...
	I0917 00:13:37.501768  838391 node_ready.go:49] node "ha-472903-m02" is "Ready"
	I0917 00:13:37.501795  838391 node_ready.go:38] duration metric: took 8.646756ms for node "ha-472903-m02" to be "Ready" ...
	I0917 00:13:37.501810  838391 api_server.go:52] waiting for apiserver process to appear ...
	I0917 00:13:37.501850  838391 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:13:37.513878  838391 api_server.go:72] duration metric: took 162.352734ms to wait for apiserver process to appear ...
	I0917 00:13:37.513902  838391 api_server.go:88] waiting for apiserver healthz status ...
	I0917 00:13:37.513995  838391 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0917 00:13:37.519494  838391 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0917 00:13:37.520502  838391 api_server.go:141] control plane version: v1.34.0
	I0917 00:13:37.520525  838391 api_server.go:131] duration metric: took 6.615829ms to wait for apiserver health ...
	I0917 00:13:37.520533  838391 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 00:13:37.529003  838391 system_pods.go:59] 24 kube-system pods found
	I0917 00:13:37.529040  838391 system_pods.go:61] "coredns-66bc5c9577-c94hz" [774f1c0f-9759-44c2-957d-5a97670f951b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:13:37.529049  838391 system_pods.go:61] "coredns-66bc5c9577-qn8m7" [1c58205e-e865-42fc-8282-23e3d779ee97] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:13:37.529058  838391 system_pods.go:61] "etcd-ha-472903" [e333577b-838c-41c5-ba86-ce3d7de57077] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 00:13:37.529064  838391 system_pods.go:61] "etcd-ha-472903-m02" [8a478117-c53d-4621-aa09-be3c16d386c0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 00:13:37.529068  838391 system_pods.go:61] "etcd-ha-472903-m03" [73e10c6a-306a-4c7e-b816-1b8d6b815292] Running
	I0917 00:13:37.529072  838391 system_pods.go:61] "kindnet-lh7dv" [1da43ca7-9af7-4573-9cdc-fd21b098ca2c] Running
	I0917 00:13:37.529075  838391 system_pods.go:61] "kindnet-q7c7s" [85db5b30-8ace-4bb0-8886-32b9ca032b2b] Running
	I0917 00:13:37.529078  838391 system_pods.go:61] "kindnet-x6twd" [f2346479-1adb-4bc7-af07-971525be2b05] Running
	I0917 00:13:37.529083  838391 system_pods.go:61] "kube-apiserver-ha-472903" [e2844751-3962-4753-8b63-79c124dd5fd7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:13:37.529092  838391 system_pods.go:61] "kube-apiserver-ha-472903-m02" [6675419c-7693-4970-b73c-8415bcda1684] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:13:37.529096  838391 system_pods.go:61] "kube-apiserver-ha-472903-m03" [8c79e747-e193-4471-a4be-ab4d604998ad] Running
	I0917 00:13:37.529102  838391 system_pods.go:61] "kube-controller-manager-ha-472903" [be5cfd0b-a3b9-44cf-8cde-74e9eb89c738] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 00:13:37.529110  838391 system_pods.go:61] "kube-controller-manager-ha-472903-m02" [54f6e7e0-0a78-4651-b24f-f902c6bf7efb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 00:13:37.529113  838391 system_pods.go:61] "kube-controller-manager-ha-472903-m03" [d48b1c84-6653-43e8-9322-ae2c64471dde] Running
	I0917 00:13:37.529118  838391 system_pods.go:61] "kube-proxy-58lkb" [32fed88c-ce9e-4536-8e96-04ab5b4f5d42] Running
	I0917 00:13:37.529121  838391 system_pods.go:61] "kube-proxy-d4m8f" [d4a70eec-48a7-4ea6-871a-1b5ed2beca9a] Running
	I0917 00:13:37.529125  838391 system_pods.go:61] "kube-proxy-kn6nb" [53644856-9fda-4556-bbb5-12254c4b00a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 00:13:37.529131  838391 system_pods.go:61] "kube-scheduler-ha-472903" [e949de65-b218-45cb-abe7-79b704aae473] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 00:13:37.529136  838391 system_pods.go:61] "kube-scheduler-ha-472903-m02" [08b5a4f0-3aa6-4a82-b171-afc1eafcd4c2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 00:13:37.529144  838391 system_pods.go:61] "kube-scheduler-ha-472903-m03" [7b954c46-3b8c-47e7-b10c-a20dd936d45c] Running
	I0917 00:13:37.529147  838391 system_pods.go:61] "kube-vip-ha-472903" [d3849ebb-d365-491f-955c-2a7ca580290b] Running
	I0917 00:13:37.529150  838391 system_pods.go:61] "kube-vip-ha-472903-m02" [748f096f-bec6-4de8-92f0-128db827bdd6] Running
	I0917 00:13:37.529153  838391 system_pods.go:61] "kube-vip-ha-472903-m03" [62b1f237-95c3-4a40-b3b7-a519f7c80ad4] Running
	I0917 00:13:37.529156  838391 system_pods.go:61] "storage-provisioner" [ac7f283e-4d28-46cf-a519-bd227237d5e7] Running
	I0917 00:13:37.529161  838391 system_pods.go:74] duration metric: took 8.623694ms to wait for pod list to return data ...
	I0917 00:13:37.529167  838391 default_sa.go:34] waiting for default service account to be created ...
	I0917 00:13:37.531877  838391 default_sa.go:45] found service account: "default"
	I0917 00:13:37.531901  838391 default_sa.go:55] duration metric: took 2.728819ms for default service account to be created ...
	I0917 00:13:37.531910  838391 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 00:13:37.538254  838391 system_pods.go:86] 24 kube-system pods found
	I0917 00:13:37.538287  838391 system_pods.go:89] "coredns-66bc5c9577-c94hz" [774f1c0f-9759-44c2-957d-5a97670f951b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:13:37.538298  838391 system_pods.go:89] "coredns-66bc5c9577-qn8m7" [1c58205e-e865-42fc-8282-23e3d779ee97] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:13:37.538308  838391 system_pods.go:89] "etcd-ha-472903" [e333577b-838c-41c5-ba86-ce3d7de57077] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 00:13:37.538315  838391 system_pods.go:89] "etcd-ha-472903-m02" [8a478117-c53d-4621-aa09-be3c16d386c0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 00:13:37.538321  838391 system_pods.go:89] "etcd-ha-472903-m03" [73e10c6a-306a-4c7e-b816-1b8d6b815292] Running
	I0917 00:13:37.538327  838391 system_pods.go:89] "kindnet-lh7dv" [1da43ca7-9af7-4573-9cdc-fd21b098ca2c] Running
	I0917 00:13:37.538333  838391 system_pods.go:89] "kindnet-q7c7s" [85db5b30-8ace-4bb0-8886-32b9ca032b2b] Running
	I0917 00:13:37.538340  838391 system_pods.go:89] "kindnet-x6twd" [f2346479-1adb-4bc7-af07-971525be2b05] Running
	I0917 00:13:37.538353  838391 system_pods.go:89] "kube-apiserver-ha-472903" [e2844751-3962-4753-8b63-79c124dd5fd7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:13:37.538366  838391 system_pods.go:89] "kube-apiserver-ha-472903-m02" [6675419c-7693-4970-b73c-8415bcda1684] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:13:37.538373  838391 system_pods.go:89] "kube-apiserver-ha-472903-m03" [8c79e747-e193-4471-a4be-ab4d604998ad] Running
	I0917 00:13:37.538383  838391 system_pods.go:89] "kube-controller-manager-ha-472903" [be5cfd0b-a3b9-44cf-8cde-74e9eb89c738] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 00:13:37.538396  838391 system_pods.go:89] "kube-controller-manager-ha-472903-m02" [54f6e7e0-0a78-4651-b24f-f902c6bf7efb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 00:13:37.538406  838391 system_pods.go:89] "kube-controller-manager-ha-472903-m03" [d48b1c84-6653-43e8-9322-ae2c64471dde] Running
	I0917 00:13:37.538447  838391 system_pods.go:89] "kube-proxy-58lkb" [32fed88c-ce9e-4536-8e96-04ab5b4f5d42] Running
	I0917 00:13:37.538457  838391 system_pods.go:89] "kube-proxy-d4m8f" [d4a70eec-48a7-4ea6-871a-1b5ed2beca9a] Running
	I0917 00:13:37.538465  838391 system_pods.go:89] "kube-proxy-kn6nb" [53644856-9fda-4556-bbb5-12254c4b00a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 00:13:37.538479  838391 system_pods.go:89] "kube-scheduler-ha-472903" [e949de65-b218-45cb-abe7-79b704aae473] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 00:13:37.538492  838391 system_pods.go:89] "kube-scheduler-ha-472903-m02" [08b5a4f0-3aa6-4a82-b171-afc1eafcd4c2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 00:13:37.538504  838391 system_pods.go:89] "kube-scheduler-ha-472903-m03" [7b954c46-3b8c-47e7-b10c-a20dd936d45c] Running
	I0917 00:13:37.538511  838391 system_pods.go:89] "kube-vip-ha-472903" [d3849ebb-d365-491f-955c-2a7ca580290b] Running
	I0917 00:13:37.538517  838391 system_pods.go:89] "kube-vip-ha-472903-m02" [748f096f-bec6-4de8-92f0-128db827bdd6] Running
	I0917 00:13:37.538523  838391 system_pods.go:89] "kube-vip-ha-472903-m03" [62b1f237-95c3-4a40-b3b7-a519f7c80ad4] Running
	I0917 00:13:37.538528  838391 system_pods.go:89] "storage-provisioner" [ac7f283e-4d28-46cf-a519-bd227237d5e7] Running
	I0917 00:13:37.538538  838391 system_pods.go:126] duration metric: took 6.620318ms to wait for k8s-apps to be running ...
	I0917 00:13:37.538550  838391 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 00:13:37.538595  838391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:13:37.551380  838391 system_svc.go:56] duration metric: took 12.817524ms WaitForService to wait for kubelet
	I0917 00:13:37.551421  838391 kubeadm.go:578] duration metric: took 199.889741ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:13:37.551446  838391 node_conditions.go:102] verifying NodePressure condition ...
	I0917 00:13:37.554601  838391 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:13:37.554630  838391 node_conditions.go:123] node cpu capacity is 8
	I0917 00:13:37.554646  838391 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:13:37.554651  838391 node_conditions.go:123] node cpu capacity is 8
	I0917 00:13:37.554657  838391 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:13:37.554661  838391 node_conditions.go:123] node cpu capacity is 8
	I0917 00:13:37.554667  838391 node_conditions.go:105] duration metric: took 3.21568ms to run NodePressure ...
	I0917 00:13:37.554682  838391 start.go:241] waiting for startup goroutines ...
	I0917 00:13:37.554713  838391 start.go:255] writing updated cluster config ...
	I0917 00:13:37.556785  838391 out.go:203] 
	I0917 00:13:37.558118  838391 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:13:37.558205  838391 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0917 00:13:37.560287  838391 out.go:179] * Starting "ha-472903-m03" control-plane node in "ha-472903" cluster
	I0917 00:13:37.561674  838391 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0917 00:13:37.562756  838391 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:13:37.563720  838391 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0917 00:13:37.563746  838391 cache.go:58] Caching tarball of preloaded images
	I0917 00:13:37.563814  838391 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:13:37.563852  838391 preload.go:172] Found /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 00:13:37.563866  838391 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0917 00:13:37.563958  838391 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0917 00:13:37.584605  838391 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:13:37.584624  838391 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:13:37.584638  838391 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:13:37.584670  838391 start.go:360] acquireMachinesLock for ha-472903-m03: {Name:mk61000bb8e4699ca3310a7fc257e30a156b69de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:13:37.584735  838391 start.go:364] duration metric: took 44.453µs to acquireMachinesLock for "ha-472903-m03"
	I0917 00:13:37.584761  838391 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:13:37.584768  838391 fix.go:54] fixHost starting: m03
	I0917 00:13:37.585018  838391 cli_runner.go:164] Run: docker container inspect ha-472903-m03 --format={{.State.Status}}
	I0917 00:13:37.604118  838391 fix.go:112] recreateIfNeeded on ha-472903-m03: state=Stopped err=<nil>
	W0917 00:13:37.604141  838391 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:13:37.606555  838391 out.go:252] * Restarting existing docker container for "ha-472903-m03" ...
	I0917 00:13:37.606618  838391 cli_runner.go:164] Run: docker start ha-472903-m03
	I0917 00:13:37.854742  838391 cli_runner.go:164] Run: docker container inspect ha-472903-m03 --format={{.State.Status}}
	I0917 00:13:37.873167  838391 kic.go:430] container "ha-472903-m03" state is running.
	I0917 00:13:37.873554  838391 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m03
	I0917 00:13:37.894030  838391 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0917 00:13:37.894294  838391 machine.go:93] provisionDockerMachine start ...
	I0917 00:13:37.894371  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0917 00:13:37.912571  838391 main.go:141] libmachine: Using SSH client type: native
	I0917 00:13:37.912785  838391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33584 <nil> <nil>}
	I0917 00:13:37.912796  838391 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:13:37.913480  838391 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50250->127.0.0.1:33584: read: connection reset by peer
	I0917 00:13:41.078339  838391 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-472903-m03
	
	I0917 00:13:41.078371  838391 ubuntu.go:182] provisioning hostname "ha-472903-m03"
	I0917 00:13:41.078468  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0917 00:13:41.099623  838391 main.go:141] libmachine: Using SSH client type: native
	I0917 00:13:41.099906  838391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33584 <nil> <nil>}
	I0917 00:13:41.099929  838391 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-472903-m03 && echo "ha-472903-m03" | sudo tee /etc/hostname
	I0917 00:13:41.256611  838391 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-472903-m03
	
	I0917 00:13:41.256681  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0917 00:13:41.275951  838391 main.go:141] libmachine: Using SSH client type: native
	I0917 00:13:41.276266  838391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33584 <nil> <nil>}
	I0917 00:13:41.276291  838391 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-472903-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-472903-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-472903-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 00:13:41.413177  838391 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:13:41.413213  838391 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-749120/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-749120/.minikube}
	I0917 00:13:41.413235  838391 ubuntu.go:190] setting up certificates
	I0917 00:13:41.413252  838391 provision.go:84] configureAuth start
	I0917 00:13:41.413326  838391 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m03
	I0917 00:13:41.432242  838391 provision.go:143] copyHostCerts
	I0917 00:13:41.432284  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem
	I0917 00:13:41.432323  838391 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem, removing ...
	I0917 00:13:41.432334  838391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem
	I0917 00:13:41.432427  838391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem (1078 bytes)
	I0917 00:13:41.432522  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem
	I0917 00:13:41.432547  838391 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem, removing ...
	I0917 00:13:41.432556  838391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem
	I0917 00:13:41.432591  838391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem (1123 bytes)
	I0917 00:13:41.432652  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem
	I0917 00:13:41.432676  838391 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem, removing ...
	I0917 00:13:41.432684  838391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem
	I0917 00:13:41.432717  838391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem (1675 bytes)
	I0917 00:13:41.432785  838391 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem org=jenkins.ha-472903-m03 san=[127.0.0.1 192.168.49.4 ha-472903-m03 localhost minikube]
	I0917 00:13:41.862573  838391 provision.go:177] copyRemoteCerts
	I0917 00:13:41.862629  838391 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:13:41.862665  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0917 00:13:41.885400  838391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33584 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa Username:docker}
	I0917 00:13:41.994335  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 00:13:41.994423  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 00:13:42.028538  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 00:13:42.028607  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 00:13:42.067649  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 00:13:42.067726  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 00:13:42.099602  838391 provision.go:87] duration metric: took 686.33067ms to configureAuth
	I0917 00:13:42.099636  838391 ubuntu.go:206] setting minikube options for container-runtime
	I0917 00:13:42.099920  838391 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:13:42.099938  838391 machine.go:96] duration metric: took 4.205627363s to provisionDockerMachine
	I0917 00:13:42.099950  838391 start.go:293] postStartSetup for "ha-472903-m03" (driver="docker")
	I0917 00:13:42.099962  838391 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 00:13:42.100117  838391 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 00:13:42.100183  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0917 00:13:42.122141  838391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33584 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa Username:docker}
	I0917 00:13:42.233836  838391 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 00:13:42.238854  838391 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 00:13:42.238889  838391 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 00:13:42.238900  838391 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 00:13:42.238908  838391 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 00:13:42.238924  838391 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-749120/.minikube/addons for local assets ...
	I0917 00:13:42.238985  838391 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-749120/.minikube/files for local assets ...
	I0917 00:13:42.239080  838391 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> 7527072.pem in /etc/ssl/certs
	I0917 00:13:42.239088  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> /etc/ssl/certs/7527072.pem
	I0917 00:13:42.239207  838391 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 00:13:42.256636  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem --> /etc/ssl/certs/7527072.pem (1708 bytes)
	I0917 00:13:42.284884  838391 start.go:296] duration metric: took 184.914637ms for postStartSetup
	I0917 00:13:42.284980  838391 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:13:42.285038  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0917 00:13:42.306309  838391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33584 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa Username:docker}
	I0917 00:13:42.403953  838391 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:13:42.409407  838391 fix.go:56] duration metric: took 4.824632112s for fixHost
	I0917 00:13:42.409462  838391 start.go:83] releasing machines lock for "ha-472903-m03", held for 4.824710137s
	I0917 00:13:42.409541  838391 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m03
	I0917 00:13:42.432198  838391 out.go:179] * Found network options:
	I0917 00:13:42.433393  838391 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0917 00:13:42.434713  838391 proxy.go:120] fail to check proxy env: Error ip not in block
	W0917 00:13:42.434749  838391 proxy.go:120] fail to check proxy env: Error ip not in block
	W0917 00:13:42.434778  838391 proxy.go:120] fail to check proxy env: Error ip not in block
	W0917 00:13:42.434796  838391 proxy.go:120] fail to check proxy env: Error ip not in block
	I0917 00:13:42.434873  838391 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 00:13:42.434927  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0917 00:13:42.434964  838391 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 00:13:42.435037  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0917 00:13:42.456445  838391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33584 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa Username:docker}
	I0917 00:13:42.457637  838391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33584 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa Username:docker}
	I0917 00:13:42.649452  838391 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0917 00:13:42.669255  838391 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0917 00:13:42.669336  838391 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:13:42.678466  838391 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0917 00:13:42.678490  838391 start.go:495] detecting cgroup driver to use...
	I0917 00:13:42.678537  838391 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:13:42.678593  838391 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 00:13:42.694034  838391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 00:13:42.706095  838391 docker.go:218] disabling cri-docker service (if available) ...
	I0917 00:13:42.706148  838391 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 00:13:42.720214  838391 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 00:13:42.731568  838391 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 00:13:42.844067  838391 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 00:13:42.990517  838391 docker.go:234] disabling docker service ...
	I0917 00:13:42.990597  838391 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 00:13:43.009784  838391 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 00:13:43.025954  838391 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 00:13:43.175561  838391 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 00:13:43.288802  838391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 00:13:43.302127  838391 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:13:43.320551  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0917 00:13:43.330880  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 00:13:43.341008  838391 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0917 00:13:43.341063  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0917 00:13:43.351160  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:13:43.361609  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 00:13:43.371882  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:13:43.382351  838391 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 00:13:43.391804  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 00:13:43.401909  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 00:13:43.413802  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 00:13:43.424357  838391 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 00:13:43.433387  838391 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 00:13:43.442035  838391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:13:43.556953  838391 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0917 00:13:43.771383  838391 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0917 00:13:43.771487  838391 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0917 00:13:43.776031  838391 start.go:563] Will wait 60s for crictl version
	I0917 00:13:43.776089  838391 ssh_runner.go:195] Run: which crictl
	I0917 00:13:43.779581  838391 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:13:43.819843  838391 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0917 00:13:43.819918  838391 ssh_runner.go:195] Run: containerd --version
	I0917 00:13:43.856395  838391 ssh_runner.go:195] Run: containerd --version
	I0917 00:13:43.887208  838391 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0917 00:13:43.888621  838391 out.go:179]   - env NO_PROXY=192.168.49.2
	I0917 00:13:43.889813  838391 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0917 00:13:43.890984  838391 cli_runner.go:164] Run: docker network inspect ha-472903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:13:43.910830  838391 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0917 00:13:43.915764  838391 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:13:43.928519  838391 mustload.go:65] Loading cluster: ha-472903
	I0917 00:13:43.928713  838391 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:13:43.928903  838391 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0917 00:13:43.947488  838391 host.go:66] Checking if "ha-472903" exists ...
	I0917 00:13:43.947756  838391 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903 for IP: 192.168.49.4
	I0917 00:13:43.947768  838391 certs.go:194] generating shared ca certs ...
	I0917 00:13:43.947788  838391 certs.go:226] acquiring lock for ca certs: {Name:mk87d179b4a631193bd9c86db8034ccf19400cde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:13:43.947924  838391 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key
	I0917 00:13:43.947984  838391 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key
	I0917 00:13:43.947997  838391 certs.go:256] generating profile certs ...
	I0917 00:13:43.948089  838391 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key
	I0917 00:13:43.948160  838391 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.14b885b8
	I0917 00:13:43.948220  838391 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key
	I0917 00:13:43.948236  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 00:13:43.948257  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 00:13:43.948274  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 00:13:43.948291  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 00:13:43.948305  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 00:13:43.948322  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 00:13:43.948341  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 00:13:43.948359  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 00:13:43.948448  838391 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem (1338 bytes)
	W0917 00:13:43.948497  838391 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707_empty.pem, impossibly tiny 0 bytes
	I0917 00:13:43.948514  838391 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 00:13:43.948542  838391 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem (1078 bytes)
	I0917 00:13:43.948574  838391 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem (1123 bytes)
	I0917 00:13:43.948605  838391 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem (1675 bytes)
	I0917 00:13:43.948679  838391 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem (1708 bytes)
	I0917 00:13:43.948730  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:13:43.948750  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem -> /usr/share/ca-certificates/752707.pem
	I0917 00:13:43.948766  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> /usr/share/ca-certificates/7527072.pem
	I0917 00:13:43.948828  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0917 00:13:43.966378  838391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33574 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0917 00:13:44.054709  838391 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0917 00:13:44.058781  838391 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0917 00:13:44.071805  838391 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0917 00:13:44.075707  838391 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0917 00:13:44.088751  838391 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0917 00:13:44.092347  838391 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0917 00:13:44.104909  838391 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0917 00:13:44.108527  838391 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0917 00:13:44.121249  838391 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0917 00:13:44.124730  838391 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0917 00:13:44.137128  838391 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0917 00:13:44.140545  838391 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0917 00:13:44.153313  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 00:13:44.178995  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 00:13:44.203321  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 00:13:44.228724  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0917 00:13:44.253672  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0917 00:13:44.277964  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 00:13:44.302441  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 00:13:44.326350  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 00:13:44.351539  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 00:13:44.376666  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem --> /usr/share/ca-certificates/752707.pem (1338 bytes)
	I0917 00:13:44.404677  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem --> /usr/share/ca-certificates/7527072.pem (1708 bytes)
	I0917 00:13:44.431366  838391 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0917 00:13:44.450278  838391 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0917 00:13:44.468513  838391 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0917 00:13:44.486743  838391 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0917 00:13:44.504987  838391 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0917 00:13:44.524143  838391 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0917 00:13:44.542282  838391 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0917 00:13:44.563055  838391 ssh_runner.go:195] Run: openssl version
	I0917 00:13:44.569331  838391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 00:13:44.580250  838391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:13:44.584080  838391 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:13:44.584138  838391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:13:44.591070  838391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 00:13:44.600282  838391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/752707.pem && ln -fs /usr/share/ca-certificates/752707.pem /etc/ssl/certs/752707.pem"
	I0917 00:13:44.610104  838391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/752707.pem
	I0917 00:13:44.613726  838391 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 23:54 /usr/share/ca-certificates/752707.pem
	I0917 00:13:44.613768  838391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/752707.pem
	I0917 00:13:44.620611  838391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/752707.pem /etc/ssl/certs/51391683.0"
	I0917 00:13:44.629788  838391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7527072.pem && ln -fs /usr/share/ca-certificates/7527072.pem /etc/ssl/certs/7527072.pem"
	I0917 00:13:44.639483  838391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7527072.pem
	I0917 00:13:44.643062  838391 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 23:54 /usr/share/ca-certificates/7527072.pem
	I0917 00:13:44.643110  838391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7527072.pem
	I0917 00:13:44.650489  838391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7527072.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 00:13:44.659935  838391 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 00:13:44.663514  838391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 00:13:44.669906  838391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 00:13:44.676511  838391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 00:13:44.682889  838391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 00:13:44.689353  838391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 00:13:44.695631  838391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0917 00:13:44.702340  838391 kubeadm.go:926] updating node {m03 192.168.49.4 8443 v1.34.0 containerd true true} ...
	I0917 00:13:44.702470  838391 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-472903-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 00:13:44.702498  838391 kube-vip.go:115] generating kube-vip config ...
	I0917 00:13:44.702533  838391 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0917 00:13:44.715980  838391 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:13:44.716039  838391 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0917 00:13:44.716091  838391 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 00:13:44.725480  838391 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 00:13:44.725529  838391 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0917 00:13:44.734323  838391 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0917 00:13:44.753458  838391 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 00:13:44.773199  838391 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0917 00:13:44.791551  838391 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0917 00:13:44.795163  838391 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:13:44.806641  838391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:13:44.919558  838391 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:13:44.932561  838391 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0917 00:13:44.932786  838391 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:13:44.934564  838391 out.go:179] * Verifying Kubernetes components...
	I0917 00:13:44.935745  838391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:13:45.049795  838391 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:13:45.064166  838391 kapi.go:59] client config for ha-472903: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key", CAFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0917 00:13:45.064235  838391 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0917 00:13:45.064458  838391 node_ready.go:35] waiting up to 6m0s for node "ha-472903-m03" to be "Ready" ...
	I0917 00:13:45.067494  838391 node_ready.go:49] node "ha-472903-m03" is "Ready"
	I0917 00:13:45.067523  838391 node_ready.go:38] duration metric: took 3.046711ms for node "ha-472903-m03" to be "Ready" ...
	I0917 00:13:45.067540  838391 api_server.go:52] waiting for apiserver process to appear ...
	I0917 00:13:45.067600  838391 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:13:45.078867  838391 api_server.go:72] duration metric: took 146.25055ms to wait for apiserver process to appear ...
	I0917 00:13:45.078891  838391 api_server.go:88] waiting for apiserver healthz status ...
	I0917 00:13:45.078908  838391 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0917 00:13:45.084241  838391 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0917 00:13:45.085084  838391 api_server.go:141] control plane version: v1.34.0
	I0917 00:13:45.085104  838391 api_server.go:131] duration metric: took 6.207355ms to wait for apiserver health ...
	I0917 00:13:45.085112  838391 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 00:13:45.090968  838391 system_pods.go:59] 24 kube-system pods found
	I0917 00:13:45.091001  838391 system_pods.go:61] "coredns-66bc5c9577-c94hz" [774f1c0f-9759-44c2-957d-5a97670f951b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:13:45.091023  838391 system_pods.go:61] "coredns-66bc5c9577-qn8m7" [1c58205e-e865-42fc-8282-23e3d779ee97] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:13:45.091035  838391 system_pods.go:61] "etcd-ha-472903" [e333577b-838c-41c5-ba86-ce3d7de57077] Running
	I0917 00:13:45.091045  838391 system_pods.go:61] "etcd-ha-472903-m02" [8a478117-c53d-4621-aa09-be3c16d386c0] Running
	I0917 00:13:45.091053  838391 system_pods.go:61] "etcd-ha-472903-m03" [73e10c6a-306a-4c7e-b816-1b8d6b815292] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 00:13:45.091060  838391 system_pods.go:61] "kindnet-lh7dv" [1da43ca7-9af7-4573-9cdc-fd21b098ca2c] Running
	I0917 00:13:45.091064  838391 system_pods.go:61] "kindnet-q7c7s" [85db5b30-8ace-4bb0-8886-32b9ca032b2b] Running
	I0917 00:13:45.091070  838391 system_pods.go:61] "kindnet-x6twd" [f2346479-1adb-4bc7-af07-971525be2b05] Running
	I0917 00:13:45.091076  838391 system_pods.go:61] "kube-apiserver-ha-472903" [e2844751-3962-4753-8b63-79c124dd5fd7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:13:45.091088  838391 system_pods.go:61] "kube-apiserver-ha-472903-m02" [6675419c-7693-4970-b73c-8415bcda1684] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:13:45.091100  838391 system_pods.go:61] "kube-apiserver-ha-472903-m03" [8c79e747-e193-4471-a4be-ab4d604998ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:13:45.091109  838391 system_pods.go:61] "kube-controller-manager-ha-472903" [be5cfd0b-a3b9-44cf-8cde-74e9eb89c738] Running
	I0917 00:13:45.091115  838391 system_pods.go:61] "kube-controller-manager-ha-472903-m02" [54f6e7e0-0a78-4651-b24f-f902c6bf7efb] Running
	I0917 00:13:45.091127  838391 system_pods.go:61] "kube-controller-manager-ha-472903-m03" [d48b1c84-6653-43e8-9322-ae2c64471dde] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 00:13:45.091135  838391 system_pods.go:61] "kube-proxy-58lkb" [32fed88c-ce9e-4536-8e96-04ab5b4f5d42] Running
	I0917 00:13:45.091141  838391 system_pods.go:61] "kube-proxy-d4m8f" [d4a70eec-48a7-4ea6-871a-1b5ed2beca9a] Running
	I0917 00:13:45.091152  838391 system_pods.go:61] "kube-proxy-kn6nb" [53644856-9fda-4556-bbb5-12254c4b00a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 00:13:45.091159  838391 system_pods.go:61] "kube-scheduler-ha-472903" [e949de65-b218-45cb-abe7-79b704aae473] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 00:13:45.091164  838391 system_pods.go:61] "kube-scheduler-ha-472903-m02" [08b5a4f0-3aa6-4a82-b171-afc1eafcd4c2] Running
	I0917 00:13:45.091177  838391 system_pods.go:61] "kube-scheduler-ha-472903-m03" [7b954c46-3b8c-47e7-b10c-a20dd936d45c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 00:13:45.091187  838391 system_pods.go:61] "kube-vip-ha-472903" [d3849ebb-d365-491f-955c-2a7ca580290b] Running
	I0917 00:13:45.091196  838391 system_pods.go:61] "kube-vip-ha-472903-m02" [748f096f-bec6-4de8-92f0-128db827bdd6] Running
	I0917 00:13:45.091200  838391 system_pods.go:61] "kube-vip-ha-472903-m03" [62b1f237-95c3-4a40-b3b7-a519f7c80ad4] Running
	I0917 00:13:45.091208  838391 system_pods.go:61] "storage-provisioner" [ac7f283e-4d28-46cf-a519-bd227237d5e7] Running
	I0917 00:13:45.091216  838391 system_pods.go:74] duration metric: took 6.096009ms to wait for pod list to return data ...
	I0917 00:13:45.091227  838391 default_sa.go:34] waiting for default service account to be created ...
	I0917 00:13:45.093796  838391 default_sa.go:45] found service account: "default"
	I0917 00:13:45.093813  838391 default_sa.go:55] duration metric: took 2.577656ms for default service account to be created ...
	I0917 00:13:45.093820  838391 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 00:13:45.099455  838391 system_pods.go:86] 24 kube-system pods found
	I0917 00:13:45.099490  838391 system_pods.go:89] "coredns-66bc5c9577-c94hz" [774f1c0f-9759-44c2-957d-5a97670f951b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:13:45.099501  838391 system_pods.go:89] "coredns-66bc5c9577-qn8m7" [1c58205e-e865-42fc-8282-23e3d779ee97] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:13:45.099507  838391 system_pods.go:89] "etcd-ha-472903" [e333577b-838c-41c5-ba86-ce3d7de57077] Running
	I0917 00:13:45.099511  838391 system_pods.go:89] "etcd-ha-472903-m02" [8a478117-c53d-4621-aa09-be3c16d386c0] Running
	I0917 00:13:45.099518  838391 system_pods.go:89] "etcd-ha-472903-m03" [73e10c6a-306a-4c7e-b816-1b8d6b815292] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 00:13:45.099540  838391 system_pods.go:89] "kindnet-lh7dv" [1da43ca7-9af7-4573-9cdc-fd21b098ca2c] Running
	I0917 00:13:45.099551  838391 system_pods.go:89] "kindnet-q7c7s" [85db5b30-8ace-4bb0-8886-32b9ca032b2b] Running
	I0917 00:13:45.099556  838391 system_pods.go:89] "kindnet-x6twd" [f2346479-1adb-4bc7-af07-971525be2b05] Running
	I0917 00:13:45.099563  838391 system_pods.go:89] "kube-apiserver-ha-472903" [e2844751-3962-4753-8b63-79c124dd5fd7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:13:45.099578  838391 system_pods.go:89] "kube-apiserver-ha-472903-m02" [6675419c-7693-4970-b73c-8415bcda1684] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:13:45.099589  838391 system_pods.go:89] "kube-apiserver-ha-472903-m03" [8c79e747-e193-4471-a4be-ab4d604998ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:13:45.099596  838391 system_pods.go:89] "kube-controller-manager-ha-472903" [be5cfd0b-a3b9-44cf-8cde-74e9eb89c738] Running
	I0917 00:13:45.099601  838391 system_pods.go:89] "kube-controller-manager-ha-472903-m02" [54f6e7e0-0a78-4651-b24f-f902c6bf7efb] Running
	I0917 00:13:45.099614  838391 system_pods.go:89] "kube-controller-manager-ha-472903-m03" [d48b1c84-6653-43e8-9322-ae2c64471dde] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 00:13:45.099624  838391 system_pods.go:89] "kube-proxy-58lkb" [32fed88c-ce9e-4536-8e96-04ab5b4f5d42] Running
	I0917 00:13:45.099632  838391 system_pods.go:89] "kube-proxy-d4m8f" [d4a70eec-48a7-4ea6-871a-1b5ed2beca9a] Running
	I0917 00:13:45.099639  838391 system_pods.go:89] "kube-proxy-kn6nb" [53644856-9fda-4556-bbb5-12254c4b00a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 00:13:45.099649  838391 system_pods.go:89] "kube-scheduler-ha-472903" [e949de65-b218-45cb-abe7-79b704aae473] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 00:13:45.099657  838391 system_pods.go:89] "kube-scheduler-ha-472903-m02" [08b5a4f0-3aa6-4a82-b171-afc1eafcd4c2] Running
	I0917 00:13:45.099665  838391 system_pods.go:89] "kube-scheduler-ha-472903-m03" [7b954c46-3b8c-47e7-b10c-a20dd936d45c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 00:13:45.099678  838391 system_pods.go:89] "kube-vip-ha-472903" [d3849ebb-d365-491f-955c-2a7ca580290b] Running
	I0917 00:13:45.099682  838391 system_pods.go:89] "kube-vip-ha-472903-m02" [748f096f-bec6-4de8-92f0-128db827bdd6] Running
	I0917 00:13:45.099688  838391 system_pods.go:89] "kube-vip-ha-472903-m03" [62b1f237-95c3-4a40-b3b7-a519f7c80ad4] Running
	I0917 00:13:45.099693  838391 system_pods.go:89] "storage-provisioner" [ac7f283e-4d28-46cf-a519-bd227237d5e7] Running
	I0917 00:13:45.099701  838391 system_pods.go:126] duration metric: took 5.874708ms to wait for k8s-apps to be running ...
	I0917 00:13:45.099714  838391 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 00:13:45.099765  838391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:13:45.111785  838391 system_svc.go:56] duration metric: took 12.061761ms WaitForService to wait for kubelet
	I0917 00:13:45.111811  838391 kubeadm.go:578] duration metric: took 179.201567ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:13:45.111829  838391 node_conditions.go:102] verifying NodePressure condition ...
	I0917 00:13:45.115075  838391 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:13:45.115095  838391 node_conditions.go:123] node cpu capacity is 8
	I0917 00:13:45.115109  838391 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:13:45.115114  838391 node_conditions.go:123] node cpu capacity is 8
	I0917 00:13:45.115118  838391 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:13:45.115124  838391 node_conditions.go:123] node cpu capacity is 8
	I0917 00:13:45.115130  838391 node_conditions.go:105] duration metric: took 3.295987ms to run NodePressure ...
	I0917 00:13:45.115145  838391 start.go:241] waiting for startup goroutines ...
	I0917 00:13:45.115177  838391 start.go:255] writing updated cluster config ...
	I0917 00:13:45.116870  838391 out.go:203] 
	I0917 00:13:45.117967  838391 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:13:45.118090  838391 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0917 00:13:45.119494  838391 out.go:179] * Starting "ha-472903-m04" worker node in "ha-472903" cluster
	I0917 00:13:45.120460  838391 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0917 00:13:45.121518  838391 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:13:45.122495  838391 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0917 00:13:45.122511  838391 cache.go:58] Caching tarball of preloaded images
	I0917 00:13:45.122563  838391 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:13:45.122595  838391 preload.go:172] Found /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 00:13:45.122603  838391 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0917 00:13:45.122694  838391 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0917 00:13:45.143478  838391 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:13:45.143500  838391 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:13:45.143517  838391 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:13:45.143550  838391 start.go:360] acquireMachinesLock for ha-472903-m04: {Name:mkdbbd0d5b3cd7ad4b13d37f2d562d6d6421c5de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:13:45.143618  838391 start.go:364] duration metric: took 45.935µs to acquireMachinesLock for "ha-472903-m04"
	I0917 00:13:45.143643  838391 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:13:45.143650  838391 fix.go:54] fixHost starting: m04
	I0917 00:13:45.143945  838391 cli_runner.go:164] Run: docker container inspect ha-472903-m04 --format={{.State.Status}}
	I0917 00:13:45.161874  838391 fix.go:112] recreateIfNeeded on ha-472903-m04: state=Stopped err=<nil>
	W0917 00:13:45.161907  838391 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:13:45.163684  838391 out.go:252] * Restarting existing docker container for "ha-472903-m04" ...
	I0917 00:13:45.163768  838391 cli_runner.go:164] Run: docker start ha-472903-m04
	I0917 00:13:45.414854  838391 cli_runner.go:164] Run: docker container inspect ha-472903-m04 --format={{.State.Status}}
	I0917 00:13:45.433545  838391 kic.go:430] container "ha-472903-m04" state is running.
	I0917 00:13:45.433944  838391 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m04
	I0917 00:13:45.452344  838391 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0917 00:13:45.452626  838391 machine.go:93] provisionDockerMachine start ...
	I0917 00:13:45.452705  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	I0917 00:13:45.471203  838391 main.go:141] libmachine: Using SSH client type: native
	I0917 00:13:45.471486  838391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33589 <nil> <nil>}
	I0917 00:13:45.471509  838391 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:13:45.472182  838391 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55516->127.0.0.1:33589: read: connection reset by peer
	I0917 00:13:48.473360  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:13:51.474441  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:13:54.475694  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:13:57.476729  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:00.477687  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:03.477978  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:06.479736  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:09.480885  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:12.482720  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:15.483800  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:18.484741  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:21.485809  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:24.487156  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:27.488676  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:30.489805  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:33.490276  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:36.491714  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:39.492658  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:42.493967  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:45.494632  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:48.495764  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:51.496767  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:54.497734  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:57.499659  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:00.500675  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:03.501862  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:06.503834  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:09.505079  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:12.507641  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:15.508761  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:18.509736  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:21.510672  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:24.512280  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:27.514552  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:30.515709  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:33.516144  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:36.518405  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:39.519733  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:42.521625  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:45.522451  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:48.523249  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:51.524945  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:54.525931  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:57.527643  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:16:00.528649  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:16:03.529267  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:16:06.531578  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:16:09.532530  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:16:12.534632  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:16:15.537051  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:16:18.537304  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:16:21.538664  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:16:24.539680  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:16:27.541681  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:16:30.542852  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:16:33.543744  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:16:36.544245  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:16:39.544518  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:16:42.546746  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:16:45.548509  838391 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:16:45.548571  838391 ubuntu.go:182] provisioning hostname "ha-472903-m04"
	I0917 00:16:45.548664  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:16:45.567482  838391 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	I0917 00:16:45.567574  838391 machine.go:96] duration metric: took 3m0.114930329s to provisionDockerMachine
	I0917 00:16:45.567666  838391 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:16:45.567704  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:16:45.586204  838391 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	I0917 00:16:45.586381  838391 retry.go:31] will retry after 243.120334ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:16:45.829742  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:16:45.848018  838391 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	I0917 00:16:45.848165  838391 retry.go:31] will retry after 204.404017ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:16:46.053620  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:16:46.071508  838391 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	I0917 00:16:46.071648  838391 retry.go:31] will retry after 637.92377ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:16:46.710530  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:16:46.728463  838391 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	W0917 00:16:46.728598  838391 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:16:46.728620  838391 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:16:46.728676  838391 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:16:46.728722  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:16:46.746202  838391 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	I0917 00:16:46.746328  838391 retry.go:31] will retry after 328.494131ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:16:47.075622  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:16:47.094084  838391 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	I0917 00:16:47.094205  838391 retry.go:31] will retry after 397.703456ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:16:47.492843  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:16:47.511608  838391 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	I0917 00:16:47.511709  838391 retry.go:31] will retry after 759.296258ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:16:48.271608  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:16:48.289666  838391 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	W0917 00:16:48.289812  838391 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:16:48.289830  838391 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:16:48.289844  838391 fix.go:56] duration metric: took 3m3.146193546s for fixHost
	I0917 00:16:48.289858  838391 start.go:83] releasing machines lock for "ha-472903-m04", held for 3m3.146226948s
	W0917 00:16:48.289881  838391 start.go:714] error starting host: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:16:48.289975  838391 out.go:285] ! StartHost failed, but will try again: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:16:48.289987  838391 start.go:729] Will try again in 5 seconds ...
	I0917 00:16:53.290141  838391 start.go:360] acquireMachinesLock for ha-472903-m04: {Name:mkdbbd0d5b3cd7ad4b13d37f2d562d6d6421c5de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:16:53.290272  838391 start.go:364] duration metric: took 94.983µs to acquireMachinesLock for "ha-472903-m04"
	I0917 00:16:53.290297  838391 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:16:53.290303  838391 fix.go:54] fixHost starting: m04
	I0917 00:16:53.290646  838391 cli_runner.go:164] Run: docker container inspect ha-472903-m04 --format={{.State.Status}}
	I0917 00:16:53.309611  838391 fix.go:112] recreateIfNeeded on ha-472903-m04: state=Stopped err=<nil>
	W0917 00:16:53.309640  838391 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:16:53.311233  838391 out.go:252] * Restarting existing docker container for "ha-472903-m04" ...
	I0917 00:16:53.311300  838391 cli_runner.go:164] Run: docker start ha-472903-m04
	I0917 00:16:53.541222  838391 cli_runner.go:164] Run: docker container inspect ha-472903-m04 --format={{.State.Status}}
	I0917 00:16:53.560095  838391 kic.go:430] container "ha-472903-m04" state is running.
	I0917 00:16:53.560573  838391 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m04
	I0917 00:16:53.580208  838391 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0917 00:16:53.580538  838391 machine.go:93] provisionDockerMachine start ...
	I0917 00:16:53.580642  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	I0917 00:16:53.599573  838391 main.go:141] libmachine: Using SSH client type: native
	I0917 00:16:53.599853  838391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33594 <nil> <nil>}
	I0917 00:16:53.599867  838391 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:16:53.600481  838391 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36824->127.0.0.1:33594: read: connection reset by peer
	I0917 00:16:56.602700  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:16:59.603638  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:02.605644  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:05.607721  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:08.608037  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:11.609632  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:14.610658  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:17.612855  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:20.613697  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:23.614397  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:26.616706  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:29.617175  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:32.618651  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:35.620635  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:38.621502  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:41.622948  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:44.624290  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:47.624933  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:50.625690  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:53.626092  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:56.628195  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:59.629019  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:02.631303  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:05.632822  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:08.633316  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:11.635679  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:14.636798  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:17.638657  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:20.639654  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:23.640721  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:26.642651  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:29.643601  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:32.645639  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:35.647624  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:38.648379  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:41.650676  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:44.651634  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:47.653582  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:50.654648  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:53.655970  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:56.658210  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:59.658941  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:19:02.661113  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:19:05.663405  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:19:08.664478  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:19:11.666153  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:19:14.667567  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:19:17.668447  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:19:20.668923  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:19:23.669615  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:19:26.671877  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:19:29.673145  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:19:32.674637  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:19:35.677064  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:19:38.678152  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:19:41.680118  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:19:44.681450  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:19:47.682442  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:19:50.682884  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:19:53.683789  838391 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:19:53.683836  838391 ubuntu.go:182] provisioning hostname "ha-472903-m04"
	I0917 00:19:53.683924  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:19:53.702821  838391 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	I0917 00:19:53.702901  838391 machine.go:96] duration metric: took 3m0.122343923s to provisionDockerMachine
	I0917 00:19:53.702985  838391 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:19:53.703018  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:19:53.720196  838391 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	I0917 00:19:53.720349  838391 retry.go:31] will retry after 273.264226ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:19:53.994608  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:19:54.012758  838391 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	I0917 00:19:54.012877  838391 retry.go:31] will retry after 451.557634ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:19:54.465611  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:19:54.483957  838391 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	I0917 00:19:54.484069  838391 retry.go:31] will retry after 372.513327ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:19:54.857680  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:19:54.875097  838391 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	W0917 00:19:54.875215  838391 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:19:54.875229  838391 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:19:54.875274  838391 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:19:54.875305  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:19:54.892677  838391 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	I0917 00:19:54.892775  838391 retry.go:31] will retry after 244.26035ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:19:55.137223  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:19:55.156010  838391 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	I0917 00:19:55.156141  838391 retry.go:31] will retry after 195.694179ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:19:55.352609  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:19:55.370515  838391 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	I0917 00:19:55.370623  838391 retry.go:31] will retry after 349.362306ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:19:55.720142  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:19:55.737839  838391 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	I0917 00:19:55.737968  838391 retry.go:31] will retry after 818.87418ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:19:56.557986  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:19:56.575881  838391 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	W0917 00:19:56.576024  838391 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:19:56.576041  838391 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:19:56.576050  838391 fix.go:56] duration metric: took 3m3.285747581s for fixHost
	I0917 00:19:56.576057  838391 start.go:83] releasing machines lock for "ha-472903-m04", held for 3m3.285773333s
	W0917 00:19:56.576146  838391 out.go:285] * Failed to start docker container. Running "minikube delete -p ha-472903" may fix it: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:19:56.578148  838391 out.go:203] 
	W0917 00:19:56.579015  838391 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: Failed to start host: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:19:56.579029  838391 out.go:285] * 
	W0917 00:19:56.580824  838391 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 00:19:56.581780  838391 out.go:203] 
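	(Editor's note) The failure loop above repeatedly runs "docker container inspect" on the "22/tcp" port mapping of a container that is not running, which is why every attempt exits with code 1 and the provisioner eventually gives up with GUEST_START. As a minimal illustrative sketch (not minikube's actual implementation), the same docker CLI templates that appear in the log can be combined with a state check so the port lookup is only attempted on a running container:

	// sketch: check container state before asking Docker for the published SSH port
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func containerState(name string) (string, error) {
		// same template used in the log: --format={{.State.Status}}
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").Output()
		return strings.TrimSpace(string(out)), err
	}

	func sshHostPort(name string) (string, error) {
		state, err := containerState(name)
		if err != nil {
			return "", err
		}
		if state != "running" {
			return "", fmt.Errorf("container %s is %s; refusing to inspect published ports", name, state)
		}
		// same template used in the log to resolve the host port mapped to 22/tcp
		out, err := exec.Command("docker", "container", "inspect", "-f",
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, name).Output()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		port, err := sshHostPort("ha-472903-m04")
		if err != nil {
			fmt.Println("ssh port lookup failed:", err)
			return
		}
		fmt.Println("ssh host port:", port)
	}

	Run against ha-472903-m04 while it is stopped, this reports the state error once instead of re-querying the port mapping, which is the behaviour the retries above are working around.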
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c8a737e1be33c       6e38f40d628db       5 minutes ago       Running             storage-provisioner       4                   fe7a407d2eb97       storage-provisioner
	2a56abb41f49d       409467f978b4a       6 minutes ago       Running             kindnet-cni               1                   2c028f64de7ca       kindnet-lh7dv
	b4ccada04ba90       8c811b4aec35f       6 minutes ago       Running             busybox                   1                   8196f32c07b91       busybox-7b57f96db7-6hrm6
	aeea8f1127caf       52546a367cc9e       6 minutes ago       Running             coredns                   1                   91d98fd766ced       coredns-66bc5c9577-qn8m7
	9fc46931c7aae       52546a367cc9e       6 minutes ago       Running             coredns                   1                   5e2ab87af7d54       coredns-66bc5c9577-c94hz
	360a9ae449a3a       6e38f40d628db       6 minutes ago       Exited              storage-provisioner       3                   fe7a407d2eb97       storage-provisioner
	b1c8344888d7d       df0860106674d       6 minutes ago       Running             kube-proxy                1                   b64b7dfe57cfc       kube-proxy-d4m8f
	6ce9c5e712887       765655ea60781       6 minutes ago       Running             kube-vip                  0                   1bc9d50f267a3       kube-vip-ha-472903
	9685cc588651c       46169d968e920       6 minutes ago       Running             kube-scheduler            1                   50f4cca94a4f8       kube-scheduler-ha-472903
	c3f8ee22fca28       a0af72f2ec6d6       6 minutes ago       Running             kube-controller-manager   1                   811d527e0af1e       kube-controller-manager-ha-472903
	96d46a46d9093       90550c43ad2bc       6 minutes ago       Running             kube-apiserver            1                   9fcac3d988698       kube-apiserver-ha-472903
	90b187ed887fa       5f1f5298c888d       6 minutes ago       Running             etcd                      1                   070db27b7a5dd       etcd-ha-472903
	0a41d8b587e02       8c811b4aec35f       21 minutes ago      Exited              busybox                   0                   a2422ee3e6e6d       busybox-7b57f96db7-6hrm6
	9f103b05d2d6f       52546a367cc9e       22 minutes ago      Exited              coredns                   0                   9579263342827       coredns-66bc5c9577-c94hz
	3b457407f10e3       52546a367cc9e       22 minutes ago      Exited              coredns                   0                   290cfb537788e       coredns-66bc5c9577-qn8m7
	cc69d2451cb65       409467f978b4a       23 minutes ago      Exited              kindnet-cni               0                   3e17d6ae9b2a6       kindnet-lh7dv
	92dd4d116eb03       df0860106674d       23 minutes ago      Exited              kube-proxy                0                   8c0ecd5301326       kube-proxy-d4m8f
	bba28cace6502       46169d968e920       23 minutes ago      Exited              kube-scheduler            0                   f18dd7697c60f       kube-scheduler-ha-472903
	087290a41f59c       a0af72f2ec6d6       23 minutes ago      Exited              kube-controller-manager   0                   0760ebe1d2a56       kube-controller-manager-ha-472903
	0aba62132d764       90550c43ad2bc       23 minutes ago      Exited              kube-apiserver            0                   8ad1fa8bc0267       kube-apiserver-ha-472903
	23c0af0bdbe95       5f1f5298c888d       23 minutes ago      Exited              etcd                      0                   b01a62742caec       etcd-ha-472903
	
	
	==> containerd <==
	Sep 17 00:14:06 ha-472903 containerd[478]: time="2025-09-17T00:14:06.742622145Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 17 00:14:07 ha-472903 containerd[478]: time="2025-09-17T00:14:07.230475449Z" level=info msg="RemoveContainer for \"5a5a17cca6c0a643b6c0881dab5508dcb7de8e6ad77d7e6ecb81d434ab2cc8a1\""
	Sep 17 00:14:07 ha-472903 containerd[478]: time="2025-09-17T00:14:07.235120578Z" level=info msg="RemoveContainer for \"5a5a17cca6c0a643b6c0881dab5508dcb7de8e6ad77d7e6ecb81d434ab2cc8a1\" returns successfully"
	Sep 17 00:14:20 ha-472903 containerd[478]: time="2025-09-17T00:14:20.057131193Z" level=info msg="CreateContainer within sandbox \"fe7a407d2eb97d648dbca1e85a5587efe15f437488c6dd3ef99c90d4b44796b2\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:4,}"
	Sep 17 00:14:20 ha-472903 containerd[478]: time="2025-09-17T00:14:20.067657299Z" level=info msg="CreateContainer within sandbox \"fe7a407d2eb97d648dbca1e85a5587efe15f437488c6dd3ef99c90d4b44796b2\" for &ContainerMetadata{Name:storage-provisioner,Attempt:4,} returns container id \"c8a737e1be33c6e4b6e17f5359483d22d3eeb7ca2497546109c1097eb9343a7f\""
	Sep 17 00:14:20 ha-472903 containerd[478]: time="2025-09-17T00:14:20.068219427Z" level=info msg="StartContainer for \"c8a737e1be33c6e4b6e17f5359483d22d3eeb7ca2497546109c1097eb9343a7f\""
	Sep 17 00:14:20 ha-472903 containerd[478]: time="2025-09-17T00:14:20.127854739Z" level=info msg="StartContainer for \"c8a737e1be33c6e4b6e17f5359483d22d3eeb7ca2497546109c1097eb9343a7f\" returns successfully"
	Sep 17 00:14:29 ha-472903 containerd[478]: time="2025-09-17T00:14:29.048175952Z" level=info msg="RemoveContainer for \"8683544e2a9d579448e28b8f33653e2c8d1315b2d07bd7b4ce574428d93c6f3a\""
	Sep 17 00:14:29 ha-472903 containerd[478]: time="2025-09-17T00:14:29.051943763Z" level=info msg="RemoveContainer for \"8683544e2a9d579448e28b8f33653e2c8d1315b2d07bd7b4ce574428d93c6f3a\" returns successfully"
	Sep 17 00:14:29 ha-472903 containerd[478]: time="2025-09-17T00:14:29.053740259Z" level=info msg="StopPodSandbox for \"4c425da29992d5b1e2fc336043685d4cf113ef650aab347de0664ccbbee0de50\""
	Sep 17 00:14:29 ha-472903 containerd[478]: time="2025-09-17T00:14:29.053865890Z" level=info msg="TearDown network for sandbox \"4c425da29992d5b1e2fc336043685d4cf113ef650aab347de0664ccbbee0de50\" successfully"
	Sep 17 00:14:29 ha-472903 containerd[478]: time="2025-09-17T00:14:29.053890854Z" level=info msg="StopPodSandbox for \"4c425da29992d5b1e2fc336043685d4cf113ef650aab347de0664ccbbee0de50\" returns successfully"
	Sep 17 00:14:29 ha-472903 containerd[478]: time="2025-09-17T00:14:29.054466776Z" level=info msg="RemovePodSandbox for \"4c425da29992d5b1e2fc336043685d4cf113ef650aab347de0664ccbbee0de50\""
	Sep 17 00:14:29 ha-472903 containerd[478]: time="2025-09-17T00:14:29.054510253Z" level=info msg="Forcibly stopping sandbox \"4c425da29992d5b1e2fc336043685d4cf113ef650aab347de0664ccbbee0de50\""
	Sep 17 00:14:29 ha-472903 containerd[478]: time="2025-09-17T00:14:29.054597568Z" level=info msg="TearDown network for sandbox \"4c425da29992d5b1e2fc336043685d4cf113ef650aab347de0664ccbbee0de50\" successfully"
	Sep 17 00:14:29 ha-472903 containerd[478]: time="2025-09-17T00:14:29.058233686Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4c425da29992d5b1e2fc336043685d4cf113ef650aab347de0664ccbbee0de50\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Sep 17 00:14:29 ha-472903 containerd[478]: time="2025-09-17T00:14:29.058306846Z" level=info msg="RemovePodSandbox \"4c425da29992d5b1e2fc336043685d4cf113ef650aab347de0664ccbbee0de50\" returns successfully"
	Sep 17 00:14:29 ha-472903 containerd[478]: time="2025-09-17T00:14:29.058804033Z" level=info msg="StopPodSandbox for \"1c0713f862ea047ef39e7ae39aea7b7769255565bbf61da2859ac341b5b32bca\""
	Sep 17 00:14:29 ha-472903 containerd[478]: time="2025-09-17T00:14:29.058920078Z" level=info msg="TearDown network for sandbox \"1c0713f862ea047ef39e7ae39aea7b7769255565bbf61da2859ac341b5b32bca\" successfully"
	Sep 17 00:14:29 ha-472903 containerd[478]: time="2025-09-17T00:14:29.058947458Z" level=info msg="StopPodSandbox for \"1c0713f862ea047ef39e7ae39aea7b7769255565bbf61da2859ac341b5b32bca\" returns successfully"
	Sep 17 00:14:29 ha-472903 containerd[478]: time="2025-09-17T00:14:29.059233694Z" level=info msg="RemovePodSandbox for \"1c0713f862ea047ef39e7ae39aea7b7769255565bbf61da2859ac341b5b32bca\""
	Sep 17 00:14:29 ha-472903 containerd[478]: time="2025-09-17T00:14:29.059274964Z" level=info msg="Forcibly stopping sandbox \"1c0713f862ea047ef39e7ae39aea7b7769255565bbf61da2859ac341b5b32bca\""
	Sep 17 00:14:29 ha-472903 containerd[478]: time="2025-09-17T00:14:29.059351772Z" level=info msg="TearDown network for sandbox \"1c0713f862ea047ef39e7ae39aea7b7769255565bbf61da2859ac341b5b32bca\" successfully"
	Sep 17 00:14:29 ha-472903 containerd[478]: time="2025-09-17T00:14:29.062137499Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1c0713f862ea047ef39e7ae39aea7b7769255565bbf61da2859ac341b5b32bca\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Sep 17 00:14:29 ha-472903 containerd[478]: time="2025-09-17T00:14:29.062200412Z" level=info msg="RemovePodSandbox \"1c0713f862ea047ef39e7ae39aea7b7769255565bbf61da2859ac341b5b32bca\" returns successfully"
	
	
	==> coredns [3b457407f10e357ce33da7fa3fb4333f8312f0d3e3570cf8528cdcac8f5a1d0f] <==
	[INFO] 10.244.1.2:53799 - 6 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,rd,ra 126 0.009949044s
	[INFO] 10.244.0.4:39485 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157098s
	[INFO] 10.244.0.4:57871 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 89 0.000750185s
	[INFO] 10.244.0.4:53410 - 5 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,aa,rd,ra 126 0.000089028s
	[INFO] 10.244.1.2:37733 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150317s
	[INFO] 10.244.1.2:59346 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.028128363s
	[INFO] 10.244.1.2:43091 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.01004668s
	[INFO] 10.244.1.2:37227 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000191819s
	[INFO] 10.244.1.2:40079 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000125376s
	[INFO] 10.244.0.4:38168 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000181114s
	[INFO] 10.244.0.4:60067 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000087147s
	[INFO] 10.244.0.4:47611 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000122939s
	[INFO] 10.244.0.4:37626 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000121195s
	[INFO] 10.244.1.2:42817 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159509s
	[INFO] 10.244.1.2:33910 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000186538s
	[INFO] 10.244.1.2:37929 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000109836s
	[INFO] 10.244.0.4:50698 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000212263s
	[INFO] 10.244.0.4:33166 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000100167s
	[INFO] 10.244.1.2:50377 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157558s
	[INFO] 10.244.1.2:39491 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000132025s
	[INFO] 10.244.1.2:50075 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000112028s
	[INFO] 10.244.0.4:58743 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000149175s
	[INFO] 10.244.0.4:52796 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000114946s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [9f103b05d2d6fd9df1ffca0135173363251e58587aa3f9093200d96a7302d315] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45239 - 14115 "HINFO IN 5883645869461503498.3950535614037284853. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.058516241s
	[INFO] 10.244.1.2:55352 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 89 0.003252862s
	[INFO] 10.244.0.4:33650 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.001640931s
	[INFO] 10.244.0.4:50077 - 6 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,rd,ra 124 0.000621363s
	[INFO] 10.244.1.2:48439 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000189187s
	[INFO] 10.244.1.2:39582 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000151327s
	[INFO] 10.244.1.2:59539 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000140715s
	[INFO] 10.244.0.4:42999 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000177514s
	[INFO] 10.244.0.4:36769 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.010694753s
	[INFO] 10.244.0.4:53074 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000158932s
	[INFO] 10.244.0.4:57223 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00012213s
	[INFO] 10.244.1.2:50810 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000176678s
	[INFO] 10.244.0.4:58045 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142445s
	[INFO] 10.244.0.4:39777 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000123555s
	[INFO] 10.244.1.2:59022 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000148853s
	[INFO] 10.244.0.4:45136 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001657s
	[INFO] 10.244.0.4:37711 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000134332s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [9fc46931c7aae5fea2058b723439b03184beee352ff9a7efcf262818181a635d] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60683 - 9436 "HINFO IN 7751308179169184926.6829077423459472962. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.019258685s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> coredns [aeea8f1127caf7117ade119a9e492104789925a531209d0aba3022cd18cb7ce1] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40200 - 1569 "HINFO IN 6158707635578374570.8737516254824064952. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.057247461s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
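	(Editor's note) Both restarted coredns instances log "dial tcp 10.96.0.1:443: i/o timeout" while listing Services, Namespaces and EndpointSlices, i.e. the in-cluster kubernetes Service VIP was unreachable while the control plane was coming back up. A minimal sketch of probing that same endpoint from inside the cluster (a hypothetical standalone check, not part of coredns):

	// sketch: probe the default kubernetes Service VIP that the coredns
	// kubernetes plugin fails to reach in the log above (TCP 10.96.0.1:443)
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		addr := "10.96.0.1:443" // Service VIP taken from the coredns errors above
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err != nil {
			fmt.Println("unreachable:", err) // corresponds to the i/o timeout in the log
			return
		}
		conn.Close()
		fmt.Println("reachable:", addr)
	}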
	
	
	==> describe nodes <==
	Name:               ha-472903
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-472903
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=ha-472903
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_16T23_56_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Sep 2025 23:56:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-472903
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:19:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:17:29 +0000   Tue, 16 Sep 2025 23:56:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:17:29 +0000   Tue, 16 Sep 2025 23:56:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:17:29 +0000   Tue, 16 Sep 2025 23:56:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:17:29 +0000   Tue, 16 Sep 2025 23:56:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-472903
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 e92083047f3148b2867b7885ff1f4fb4
	  System UUID:                695af4c7-28fb-4299-9454-75db3262ca2c
	  Boot ID:                    4acfd7d3-9698-436f-b4ae-efdf6bd483d5
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-6hrm6             0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 coredns-66bc5c9577-c94hz             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     23m
	  kube-system                 coredns-66bc5c9577-qn8m7             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     23m
	  kube-system                 etcd-ha-472903                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         23m
	  kube-system                 kindnet-lh7dv                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      23m
	  kube-system                 kube-apiserver-ha-472903             250m (3%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-controller-manager-ha-472903    200m (2%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-proxy-d4m8f                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-scheduler-ha-472903             100m (1%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-vip-ha-472903                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m20s                  kube-proxy       
	  Normal  Starting                 23m                    kube-proxy       
	  Normal  NodeHasSufficientMemory  23m (x8 over 23m)      kubelet          Node ha-472903 status is now: NodeHasSufficientMemory
	  Normal  Starting                 23m                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  23m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     23m (x7 over 23m)      kubelet          Node ha-472903 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    23m (x8 over 23m)      kubelet          Node ha-472903 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 23m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     23m                    kubelet          Node ha-472903 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  23m                    kubelet          Node ha-472903 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23m                    kubelet          Node ha-472903 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           23m                    node-controller  Node ha-472903 event: Registered Node ha-472903 in Controller
	  Normal  RegisteredNode           22m                    node-controller  Node ha-472903 event: Registered Node ha-472903 in Controller
	  Normal  RegisteredNode           22m                    node-controller  Node ha-472903 event: Registered Node ha-472903 in Controller
	  Normal  RegisteredNode           8m1s                   node-controller  Node ha-472903 event: Registered Node ha-472903 in Controller
	  Normal  Starting                 6m28s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m28s (x8 over 6m28s)  kubelet          Node ha-472903 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m28s (x8 over 6m28s)  kubelet          Node ha-472903 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m28s (x7 over 6m28s)  kubelet          Node ha-472903 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m28s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m19s                  node-controller  Node ha-472903 event: Registered Node ha-472903 in Controller
	  Normal  RegisteredNode           6m19s                  node-controller  Node ha-472903 event: Registered Node ha-472903 in Controller
	  Normal  RegisteredNode           6m14s                  node-controller  Node ha-472903 event: Registered Node ha-472903 in Controller
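	(Editor's note) The percentages in the "Allocated resources" table above are the summed pod requests and limits divided by the node's allocatable capacity, apparently truncated to whole percent in this output (950m CPU requested of 8000m allocatable is 11.875%, shown as 11%). A small sketch reproducing that arithmetic for the ha-472903 values, under that truncation assumption:

	// sketch: recompute the request percentages kubectl describe prints for ha-472903
	package main

	import "fmt"

	func main() {
		const (
			cpuAllocatableMilli = 8 * 1000   // 8 CPUs allocatable
			memAllocatableKi    = 32863456   // allocatable memory in Ki, from the node description
			cpuRequestsMilli    = 950        // summed CPU requests (950m)
			memRequestsKi       = 290 * 1024 // summed memory requests (290Mi)
		)
		// integer division truncates, matching the 11% and 0% shown above
		fmt.Printf("cpu requests: %d%%\n", cpuRequestsMilli*100/cpuAllocatableMilli)
		fmt.Printf("memory requests: %d%%\n", memRequestsKi*100/memAllocatableKi)
	}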
	
	
	Name:               ha-472903-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-472903-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=ha-472903
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_16T23_57_23_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Sep 2025 23:57:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-472903-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:19:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:18:52 +0000   Tue, 16 Sep 2025 23:57:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:18:52 +0000   Tue, 16 Sep 2025 23:57:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:18:52 +0000   Tue, 16 Sep 2025 23:57:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:18:52 +0000   Tue, 16 Sep 2025 23:57:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-472903-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 0e1a1fb76ba244e2b9677af4de050ca0
	  System UUID:                85df9db8-f21a-4038-9f8c-4cc1d81dc0d5
	  Boot ID:                    4acfd7d3-9698-436f-b4ae-efdf6bd483d5
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-4jfjt                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 etcd-ha-472903-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         22m
	  kube-system                 kindnet-q7c7s                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      22m
	  kube-system                 kube-apiserver-ha-472903-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-ha-472903-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-58lkb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-ha-472903-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-vip-ha-472903-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 22m                    kube-proxy       
	  Normal  RegisteredNode           22m                    node-controller  Node ha-472903-m02 event: Registered Node ha-472903-m02 in Controller
	  Normal  RegisteredNode           22m                    node-controller  Node ha-472903-m02 event: Registered Node ha-472903-m02 in Controller
	  Normal  RegisteredNode           22m                    node-controller  Node ha-472903-m02 event: Registered Node ha-472903-m02 in Controller
	  Normal  NodeAllocatableEnforced  8m7s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     8m7s (x7 over 8m7s)    kubelet          Node ha-472903-m02 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    8m7s (x8 over 8m7s)    kubelet          Node ha-472903-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  8m7s (x8 over 8m7s)    kubelet          Node ha-472903-m02 status is now: NodeHasSufficientMemory
	  Normal  Starting                 8m7s                   kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m1s                   node-controller  Node ha-472903-m02 event: Registered Node ha-472903-m02 in Controller
	  Normal  Starting                 6m26s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m26s (x8 over 6m26s)  kubelet          Node ha-472903-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m26s (x8 over 6m26s)  kubelet          Node ha-472903-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m26s (x7 over 6m26s)  kubelet          Node ha-472903-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m19s                  node-controller  Node ha-472903-m02 event: Registered Node ha-472903-m02 in Controller
	  Normal  RegisteredNode           6m19s                  node-controller  Node ha-472903-m02 event: Registered Node ha-472903-m02 in Controller
	  Normal  RegisteredNode           6m14s                  node-controller  Node ha-472903-m02 event: Registered Node ha-472903-m02 in Controller
	
	
	Name:               ha-472903-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-472903-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=ha-472903
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_16T23_57_52_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Sep 2025 23:57:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-472903-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:19:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:13:41 +0000   Tue, 16 Sep 2025 23:57:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:13:41 +0000   Tue, 16 Sep 2025 23:57:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:13:41 +0000   Tue, 16 Sep 2025 23:57:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:13:41 +0000   Tue, 16 Sep 2025 23:57:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-472903-m03
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 e9db5ec4d4ea40459487b6ebc64cdda9
	  System UUID:                7eb7f2ee-a32d-4876-a4ad-58f745b9c377
	  Boot ID:                    4acfd7d3-9698-436f-b4ae-efdf6bd483d5
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-mknzs                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 etcd-ha-472903-m03                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         22m
	  kube-system                 kindnet-x6twd                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      22m
	  kube-system                 kube-apiserver-ha-472903-m03             250m (3%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-ha-472903-m03    200m (2%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-kn6nb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-ha-472903-m03             100m (1%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-vip-ha-472903-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  RegisteredNode           22m                    node-controller  Node ha-472903-m03 event: Registered Node ha-472903-m03 in Controller
	  Normal  RegisteredNode           22m                    node-controller  Node ha-472903-m03 event: Registered Node ha-472903-m03 in Controller
	  Normal  RegisteredNode           22m                    node-controller  Node ha-472903-m03 event: Registered Node ha-472903-m03 in Controller
	  Normal  RegisteredNode           8m1s                   node-controller  Node ha-472903-m03 event: Registered Node ha-472903-m03 in Controller
	  Normal  RegisteredNode           6m19s                  node-controller  Node ha-472903-m03 event: Registered Node ha-472903-m03 in Controller
	  Normal  RegisteredNode           6m19s                  node-controller  Node ha-472903-m03 event: Registered Node ha-472903-m03 in Controller
	  Normal  Starting                 6m19s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m19s (x8 over 6m19s)  kubelet          Node ha-472903-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m19s (x8 over 6m19s)  kubelet          Node ha-472903-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m19s (x7 over 6m19s)  kubelet          Node ha-472903-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m14s                  node-controller  Node ha-472903-m03 event: Registered Node ha-472903-m03 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 86 e8 75 4b 01 57 08 06
	[  +0.025562] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff be 3d a9 85 b1 bd 08 06
	[ +13.150028] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff d6 5c f0 26 cd ba 08 06
	[  +0.000341] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a 20 90 fb f5 d8 08 06
	[ +28.639349] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 06 26 63 8d db 90 08 06
	[  +0.000417] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff be 3d a9 85 b1 bd 08 06
	[  +0.836892] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 56 cc 9b 52 38 94 08 06
	[  +0.080327] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3e 79 8e c8 7e 37 08 06
	[Sep16 23:40] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 66 3b 76 df aa 6a 08 06
	[ +20.325550] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff de 39 4b 41 df 63 08 06
	[  +0.000318] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 66 3b 76 df aa 6a 08 06
	[  +8.925776] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3e cd c1 f7 dc c8 08 06
	[  +0.000373] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 3e 79 8e c8 7e 37 08 06
	
	
	==> etcd [23c0af0bdbe9526d53769461ed9f80d8c743b02e625b65cce39c888f5e7d4b4e] <==
	{"level":"warn","ts":"2025-09-17T00:13:08.865242Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128040017673679127,"retry-timeout":"500ms"}
	{"level":"info","ts":"2025-09-17T00:13:09.078092Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"aec36adc501070cc is starting a new election at term 2"}
	{"level":"info","ts":"2025-09-17T00:13:09.078216Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2025-09-17T00:13:09.078269Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 2, index: 4061] sent MsgPreVote request to 3aa85cdcd5e5557b at term 2"}
	{"level":"info","ts":"2025-09-17T00:13:09.078323Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 2, index: 4061] sent MsgPreVote request to ab9d0391dce79465 at term 2"}
	{"level":"info","ts":"2025-09-17T00:13:09.078391Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2025-09-17T00:13:09.078467Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"warn","ts":"2025-09-17T00:13:09.366348Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128040017673679127,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-09-17T00:13:09.733983Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"2.00012067s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"","error":"context deadline exceeded"}
	{"level":"info","ts":"2025-09-17T00:13:09.734106Z","caller":"traceutil/trace.go:172","msg":"trace[1703373101] range","detail":"{range_begin:/registry/health; range_end:; }","duration":"2.000255365s","start":"2025-09-17T00:13:07.733837Z","end":"2025-09-17T00:13:09.734092Z","steps":["trace[1703373101] 'agreement among raft nodes before linearized reading'  (duration: 2.000119103s)"],"step_count":1}
	{"level":"warn","ts":"2025-09-17T00:13:09.734220Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-17T00:13:07.733823Z","time spent":"2.000381887s","remote":"127.0.0.1:56470","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":0,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2025-09-17T00:13:09.824490Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"10.001550907s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"","error":"context canceled"}
	{"level":"info","ts":"2025-09-17T00:13:09.830580Z","caller":"traceutil/trace.go:172","msg":"trace[2000130708] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; }","duration":"10.007653802s","start":"2025-09-17T00:12:59.822907Z","end":"2025-09-17T00:13:09.830561Z","steps":["trace[2000130708] 'agreement among raft nodes before linearized reading'  (duration: 10.001549225s)"],"step_count":1}
	{"level":"warn","ts":"2025-09-17T00:13:09.830689Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-17T00:12:59.822890Z","time spent":"10.007768318s","remote":"127.0.0.1:56876","response type":"/etcdserverpb.KV/Range","request count":0,"request size":69,"response count":0,"response size":0,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 "}
	2025/09/17 00:13:09 WARNING: [core] [Server #5]grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2025-09-17T00:13:09.866876Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128040017673679127,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-09-17T00:13:10.366968Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128040017673679127,"retry-timeout":"500ms"}
	{"level":"info","ts":"2025-09-17T00:13:10.478109Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"aec36adc501070cc is starting a new election at term 2"}
	{"level":"info","ts":"2025-09-17T00:13:10.478171Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2025-09-17T00:13:10.478195Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 2, index: 4061] sent MsgPreVote request to 3aa85cdcd5e5557b at term 2"}
	{"level":"info","ts":"2025-09-17T00:13:10.478218Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 2, index: 4061] sent MsgPreVote request to ab9d0391dce79465 at term 2"}
	{"level":"info","ts":"2025-09-17T00:13:10.478252Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2025-09-17T00:13:10.478278Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"warn","ts":"2025-09-17T00:13:10.720561Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-17T00:13:03.715662Z","time spent":"7.004893477s","remote":"127.0.0.1:56646","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"warn","ts":"2025-09-17T00:13:10.867073Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128040017673679127,"retry-timeout":"500ms"}
	
	
	==> etcd [90b187ed887fae063d0e3d6e7f9316abbc50f1e7b9c092596b43a1c43c86e79d] <==
	{"level":"warn","ts":"2025-09-17T00:13:37.521638Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"ab9d0391dce79465","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-17T00:13:37.525372Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"ab9d0391dce79465","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-17T00:13:37.530268Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"ab9d0391dce79465","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-17T00:13:37.533277Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"ab9d0391dce79465","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-17T00:13:37.536238Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"ab9d0391dce79465","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-17T00:13:37.546878Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"ab9d0391dce79465","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-17T00:13:37.552602Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"ab9d0391dce79465","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-17T00:13:37.575314Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"ab9d0391dce79465","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-17T00:13:37.585643Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"ab9d0391dce79465","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-17T00:13:37.625861Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"ab9d0391dce79465","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-17T00:13:37.635458Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"ab9d0391dce79465","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-17T00:13:37.646172Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"ab9d0391dce79465","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-17T00:13:37.676267Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"ab9d0391dce79465","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-17T00:13:37.685990Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"ab9d0391dce79465","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-17T00:13:37.726302Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"ab9d0391dce79465","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-17T00:13:37.736181Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"ab9d0391dce79465","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"info","ts":"2025-09-17T00:13:39.651118Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"ab9d0391dce79465","stream-type":"stream Message"}
	{"level":"info","ts":"2025-09-17T00:13:39.651167Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"ab9d0391dce79465"}
	{"level":"info","ts":"2025-09-17T00:13:39.651199Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"ab9d0391dce79465"}
	{"level":"info","ts":"2025-09-17T00:13:39.653639Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"ab9d0391dce79465","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-09-17T00:13:39.653688Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"ab9d0391dce79465"}
	{"level":"info","ts":"2025-09-17T00:13:39.662722Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"ab9d0391dce79465"}
	{"level":"info","ts":"2025-09-17T00:13:39.663230Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"ab9d0391dce79465"}
	{"level":"warn","ts":"2025-09-17T00:13:39.862686Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"ab9d0391dce79465","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-09-17T00:13:39.862713Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"ab9d0391dce79465","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	
	
	==> kernel <==
	 00:19:58 up  3:02,  0 users,  load average: 0.36, 0.73, 0.86
	Linux ha-472903 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [2a56abb41f49d6755de68bb41070eee7c07fee5950b2584042a3850228b3c274] <==
	I0917 00:19:17.397702       1 main.go:324] Node ha-472903-m03 has CIDR [10.244.2.0/24] 
	I0917 00:19:27.392489       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:19:27.392527       1 main.go:301] handling current node
	I0917 00:19:27.392543       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:19:27.392548       1 main.go:324] Node ha-472903-m02 has CIDR [10.244.1.0/24] 
	I0917 00:19:27.392752       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:19:27.392765       1 main.go:324] Node ha-472903-m03 has CIDR [10.244.2.0/24] 
	I0917 00:19:37.390063       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:19:37.390101       1 main.go:301] handling current node
	I0917 00:19:37.390118       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:19:37.390123       1 main.go:324] Node ha-472903-m02 has CIDR [10.244.1.0/24] 
	I0917 00:19:37.390327       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:19:37.390339       1 main.go:324] Node ha-472903-m03 has CIDR [10.244.2.0/24] 
	I0917 00:19:47.397482       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:19:47.397526       1 main.go:301] handling current node
	I0917 00:19:47.397543       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:19:47.397548       1 main.go:324] Node ha-472903-m02 has CIDR [10.244.1.0/24] 
	I0917 00:19:47.397996       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:19:47.398026       1 main.go:324] Node ha-472903-m03 has CIDR [10.244.2.0/24] 
	I0917 00:19:57.390658       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:19:57.390704       1 main.go:301] handling current node
	I0917 00:19:57.390723       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:19:57.390729       1 main.go:324] Node ha-472903-m02 has CIDR [10.244.1.0/24] 
	I0917 00:19:57.390896       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:19:57.391108       1 main.go:324] Node ha-472903-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [cc69d2451cb65860b5bc78e027be2fc1cb0f9fa6542b4abe3bc1ff1c90a8fe60] <==
	I0917 00:12:27.503889       1 main.go:324] Node ha-472903-m02 has CIDR [10.244.1.0/24] 
	I0917 00:12:37.507295       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:12:37.507338       1 main.go:301] handling current node
	I0917 00:12:37.507353       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:12:37.507359       1 main.go:324] Node ha-472903-m02 has CIDR [10.244.1.0/24] 
	I0917 00:12:37.507565       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:12:37.507578       1 main.go:324] Node ha-472903-m03 has CIDR [10.244.2.0/24] 
	I0917 00:12:47.503578       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:12:47.503630       1 main.go:324] Node ha-472903-m03 has CIDR [10.244.2.0/24] 
	I0917 00:12:47.503841       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:12:47.503857       1 main.go:301] handling current node
	I0917 00:12:47.503874       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:12:47.503882       1 main.go:324] Node ha-472903-m02 has CIDR [10.244.1.0/24] 
	I0917 00:12:57.503552       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:12:57.503592       1 main.go:301] handling current node
	I0917 00:12:57.503612       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:12:57.503618       1 main.go:324] Node ha-472903-m02 has CIDR [10.244.1.0/24] 
	I0917 00:12:57.504021       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:12:57.504066       1 main.go:324] Node ha-472903-m03 has CIDR [10.244.2.0/24] 
	I0917 00:13:07.510512       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:13:07.510552       1 main.go:324] Node ha-472903-m03 has CIDR [10.244.2.0/24] 
	I0917 00:13:07.511170       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:13:07.511196       1 main.go:301] handling current node
	I0917 00:13:07.511281       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:13:07.511312       1 main.go:324] Node ha-472903-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [0aba62132d764965d8e1a80a4a6345bb7e34892b23143da4a7af3450cd465d6c] <==
	E0917 00:13:11.166753       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:13:11.166775       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:13:11.166780       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:13:11.166731       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:13:11.166754       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:13:11.167368       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:13:11.167554       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:13:11.167606       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:13:11.167640       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:13:11.167659       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:13:11.168321       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:13:11.168332       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:13:11.168355       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:13:11.168358       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:13:11.168761       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:13:11.168807       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:13:11.168826       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:13:11.168844       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:13:11.168845       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:13:11.168866       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:13:11.168873       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:13:11.168898       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:13:11.169017       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:13:11.169052       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:13:11.169077       1 watcher.go:335] watch chan error: etcdserver: no leader
	
	
	==> kube-apiserver [96d46a46d90937e1dc254cbb641e1f12887151faabbe128f2cc51a8a833fe573] <==
	I0917 00:13:35.109530       1 aggregator.go:171] initial CRD sync complete...
	I0917 00:13:35.109558       1 autoregister_controller.go:144] Starting autoregister controller
	I0917 00:13:35.109566       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0917 00:13:35.109573       1 cache.go:39] Caches are synced for autoregister controller
	W0917 00:13:35.114733       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.3 192.168.49.4]
	I0917 00:13:35.116809       1 controller.go:667] quota admission added evaluator for: endpoints
	I0917 00:13:35.117772       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0917 00:13:35.127627       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0917 00:13:35.133999       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0917 00:13:35.156218       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0917 00:13:35.994627       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0917 00:13:36.160405       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	W0917 00:13:36.454299       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.4]
	I0917 00:13:38.437732       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0917 00:13:38.895584       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0917 00:14:14.427245       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0917 00:14:34.638077       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:14:55.389838       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:15:41.589543       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:16:00.249213       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:16:50.539266       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:17:30.019039       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:18:00.900712       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:18:53.314317       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:19:24.721832       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [087290a41f59caa4f9bc89759bcec6cf90f47c8a2ab83b7c671a8fff35641df9] <==
	I0916 23:56:54.728442       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0916 23:56:54.728466       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0916 23:56:54.728485       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0916 23:56:54.728644       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0916 23:56:54.728665       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0916 23:56:54.728648       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0916 23:56:54.728914       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0916 23:56:54.730175       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0916 23:56:54.730201       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0916 23:56:54.732432       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0916 23:56:54.733452       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0916 23:56:54.735655       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0916 23:56:54.735714       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0916 23:56:54.735760       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0916 23:56:54.735767       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0916 23:56:54.735772       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0916 23:56:54.740680       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-472903" podCIDRs=["10.244.0.0/24"]
	I0916 23:56:54.749950       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0916 23:57:22.933124       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-472903-m02\" does not exist"
	I0916 23:57:22.943785       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-472903-m02" podCIDRs=["10.244.1.0/24"]
	I0916 23:57:24.681339       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-472903-m02"
	I0916 23:57:51.749676       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-472903-m03\" does not exist"
	I0916 23:57:51.772476       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-472903-m03" podCIDRs=["10.244.2.0/24"]
	E0916 23:57:51.829801       1 daemon_controller.go:346] "Unhandled Error" err="kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kube-proxy\", GenerateName:\"\", Namespace:\"kube-system\", SelfLink:\"\", UID:\"3f5da9fc-6769-4ca8-a715-edeace44c646\", ResourceVersion:\"594\", Generation:1, CreationTimestamp:time.Date(2025, time.September, 16, 23, 56, 49, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"k8s-app\":\"kube-proxy\"}, Annotations:map[string]string{\"deprecated.daemonset.template.generation\":\"1\"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc00222d0e0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:\"\", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"
\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"k8s-app\":\"kube-proxy\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:\"kube-proxy\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSourc
e)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc0021ed7c0), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"xtables-lock\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc002fcdce0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolu
meSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIV
olumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"lib-modules\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc002fcdcf8), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtu
alDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:\"kube-proxy\", Image:\"registry.k8s.io/kube-proxy:v1.34.0\", Command:[]string{\"/usr/local/bin/kube-proxy\", \"--config=/var/lib/kube-proxy/config.conf\", \"--hostname-override=$(NODE_NAME)\"}, Args:[]string(nil), WorkingDir:\"\", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:\"NODE_NAME\", Value:\"\", ValueFrom:(*v1.EnvVarSource)(0xc00144a7e0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.Re
sourceList(nil), Claims:[]v1.ResourceClaim(nil)}, ResizePolicy:[]v1.ContainerResizePolicy(nil), RestartPolicy:(*v1.ContainerRestartPolicy)(nil), RestartPolicyRules:[]v1.ContainerRestartRule(nil), VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:\"kube-proxy\", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/var/lib/kube-proxy\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"xtables-lock\", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/run/xtables.lock\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"lib-modules\", ReadOnly:true, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/lib/modules\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Life
cycle:(*v1.Lifecycle)(nil), TerminationMessagePath:\"/dev/termination-log\", TerminationMessagePolicy:\"File\", ImagePullPolicy:\"IfNotPresent\", SecurityContext:(*v1.SecurityContext)(0xc0019549c0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:\"Always\", TerminationGracePeriodSeconds:(*int64)(0xc001900b18), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:\"ClusterFirst\", NodeSelector:map[string]string{\"kubernetes.io/os\":\"linux\"}, ServiceAccountName:\"kube-proxy\", DeprecatedServiceAccount:\"kube-proxy\", AutomountServiceAccountToken:(*bool)(nil), NodeName:\"\", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001ba1200), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:\"\", Subdomain:\"\", Affinity:(*v1.Affinity)(nil), SchedulerName:\"default-scheduler\", Tolerations:[]v1.Toleration{v1.Toleration{Key:\"\", Operator:\"Exists\", Value:\"\", Effect:\"\", Tole
rationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:\"system-node-critical\", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil), SchedulingGates:[]v1.PodSchedulingGate(nil), ResourceClaims:[]v1.PodResourceClaim(nil), Resources:(*v1.ResourceRequirements)(nil), HostnameOverride:(*string)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:\"RollingUpdate\", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc001e14570)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001900b70)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:2, NumberMisscheduled:0, DesiredNumberScheduled:2, NumberReady:2, ObservedGeneration:1, UpdatedNumberScheduled:2, NumberAvailab
le:2, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps \"kube-proxy\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0916 23:57:54.685322       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-472903-m03"
	
	
	==> kube-controller-manager [c3f8ee22fca28b303f553c3003d1000b80565b4147ba719401c8c5f61921ee41] <==
	I0917 00:13:38.427005       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0917 00:13:38.427138       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0917 00:13:38.428331       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0917 00:13:38.431473       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0917 00:13:38.431610       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0917 00:13:38.431764       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0917 00:13:38.431826       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0917 00:13:38.431860       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0917 00:13:38.431926       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0917 00:13:38.431992       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0917 00:13:38.432765       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0917 00:13:38.432816       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0917 00:13:38.432831       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0917 00:13:38.432867       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0917 00:13:38.432870       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0917 00:13:38.433430       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0917 00:13:38.433549       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0917 00:13:38.433648       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-472903"
	I0917 00:13:38.433689       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-472903-m02"
	I0917 00:13:38.433719       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-472903-m03"
	I0917 00:13:38.433784       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0917 00:13:38.434607       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0917 00:13:38.436471       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0917 00:13:38.443120       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0917 00:13:38.447017       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	
	
	==> kube-proxy [92dd4d116eb0387dded82fb32d35690ec2d00e3f5e7ac81bf7aea0c6814edd5e] <==
	I0916 23:56:56.831012       1 server_linux.go:53] "Using iptables proxy"
	I0916 23:56:56.891635       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0916 23:56:56.991820       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0916 23:56:56.991862       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0916 23:56:56.991952       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 23:56:57.015955       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 23:56:57.016001       1 server_linux.go:132] "Using iptables Proxier"
	I0916 23:56:57.021120       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 23:56:57.021457       1 server.go:527] "Version info" version="v1.34.0"
	I0916 23:56:57.021499       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 23:56:57.024872       1 config.go:200] "Starting service config controller"
	I0916 23:56:57.024892       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0916 23:56:57.024900       1 config.go:106] "Starting endpoint slice config controller"
	I0916 23:56:57.024909       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0916 23:56:57.024890       1 config.go:403] "Starting serviceCIDR config controller"
	I0916 23:56:57.024917       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0916 23:56:57.024937       1 config.go:309] "Starting node config controller"
	I0916 23:56:57.024942       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0916 23:56:57.125608       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0916 23:56:57.125691       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0916 23:56:57.125856       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0916 23:56:57.125902       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-proxy [b1c8344888d7deab1a3203bf9e16eefcb945905ec04b591acfb2fed3104948ec] <==
	I0917 00:13:36.733439       1 server_linux.go:53] "Using iptables proxy"
	I0917 00:13:36.818219       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0917 00:13:36.918912       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0917 00:13:36.918966       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0917 00:13:36.919071       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 00:13:36.942838       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0917 00:13:36.942910       1 server_linux.go:132] "Using iptables Proxier"
	I0917 00:13:36.949958       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 00:13:36.950427       1 server.go:527] "Version info" version="v1.34.0"
	I0917 00:13:36.950467       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 00:13:36.954376       1 config.go:200] "Starting service config controller"
	I0917 00:13:36.954506       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0917 00:13:36.954587       1 config.go:309] "Starting node config controller"
	I0917 00:13:36.954660       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0917 00:13:36.954669       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0917 00:13:36.954703       1 config.go:106] "Starting endpoint slice config controller"
	I0917 00:13:36.954712       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0917 00:13:36.954729       1 config.go:403] "Starting serviceCIDR config controller"
	I0917 00:13:36.954736       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0917 00:13:37.054981       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0917 00:13:37.055026       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0917 00:13:37.055057       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [9685cc588651ced2d51ab783a94533fff6a60971435eaa8e11982eb715ef5350] <==
	I0917 00:13:30.068882       1 serving.go:386] Generated self-signed cert in-memory
	I0917 00:13:35.071453       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0917 00:13:35.071492       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 00:13:35.090261       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:13:35.090310       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:13:35.090614       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0917 00:13:35.090722       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0917 00:13:35.090743       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0917 00:13:35.090760       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0917 00:13:35.094479       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0917 00:13:35.094536       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I0917 00:13:35.190629       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:13:35.191303       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0917 00:13:35.194926       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kube-scheduler [bba28cace6502de93aa43db4fb51671581c5074990dea721d98d36d839734a67] <==
	E0916 23:56:48.619869       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0916 23:56:48.649766       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0916 23:56:48.673092       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	I0916 23:56:49.170967       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0916 23:57:51.780040       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-x6twd\": pod kindnet-x6twd is already assigned to node \"ha-472903-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-x6twd" node="ha-472903-m03"
	E0916 23:57:51.780142       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod f2346479-1adb-4bc7-af07-971525be2b05(kube-system/kindnet-x6twd) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-x6twd"
	E0916 23:57:51.780183       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-x6twd\": pod kindnet-x6twd is already assigned to node \"ha-472903-m03\"" logger="UnhandledError" pod="kube-system/kindnet-x6twd"
	I0916 23:57:51.782132       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-x6twd" node="ha-472903-m03"
	E0916 23:58:37.948695       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-x6xc9\": pod busybox-7b57f96db7-x6xc9 is already assigned to node \"ha-472903-m02\"" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-x6xc9" node="ha-472903-m02"
	E0916 23:58:37.948846       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 565a634f-ab41-4776-ba5d-63a601bfec48(default/busybox-7b57f96db7-x6xc9) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="default/busybox-7b57f96db7-x6xc9"
	E0916 23:58:37.948875       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-x6xc9\": pod busybox-7b57f96db7-x6xc9 is already assigned to node \"ha-472903-m02\"" logger="UnhandledError" pod="default/busybox-7b57f96db7-x6xc9"
	I0916 23:58:37.950251       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7b57f96db7-x6xc9" node="ha-472903-m02"
	I0916 23:58:37.966099       1 cache.go:512] "Pod was added to a different node than it was assumed" podKey="47b06c15-c007-4c50-a248-5411a0f4b6a7" pod="default/busybox-7b57f96db7-4jfjt" assumedNode="ha-472903-m02" currentNode="ha-472903"
	E0916 23:58:37.968241       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-4jfjt\": pod busybox-7b57f96db7-4jfjt is already assigned to node \"ha-472903-m02\"" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-4jfjt" node="ha-472903"
	E0916 23:58:37.968351       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 47b06c15-c007-4c50-a248-5411a0f4b6a7(default/busybox-7b57f96db7-4jfjt) was assumed on ha-472903 but assigned to ha-472903-m02" logger="UnhandledError" pod="default/busybox-7b57f96db7-4jfjt"
	E0916 23:58:37.968376       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-4jfjt\": pod busybox-7b57f96db7-4jfjt is already assigned to node \"ha-472903-m02\"" logger="UnhandledError" pod="default/busybox-7b57f96db7-4jfjt"
	I0916 23:58:37.969472       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7b57f96db7-4jfjt" node="ha-472903-m02"
	E0916 23:58:38.002469       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-wp95z\": pod busybox-7b57f96db7-wp95z is being deleted, cannot be assigned to a host" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-wp95z" node="ha-472903"
	E0916 23:58:38.002779       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-wp95z\": pod busybox-7b57f96db7-wp95z is being deleted, cannot be assigned to a host" logger="UnhandledError" pod="default/busybox-7b57f96db7-wp95z"
	E0916 23:58:38.046394       1 pod_status_patch.go:111] "Failed to patch pod status" err="pods \"busybox-7b57f96db7-xnrsc\" not found" pod="default/busybox-7b57f96db7-xnrsc"
	E0916 23:58:38.046880       1 pod_status_patch.go:111] "Failed to patch pod status" err="pods \"busybox-7b57f96db7-wp95z\" not found" pod="default/busybox-7b57f96db7-wp95z"
	E0916 23:58:40.050124       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-6hrm6\": pod busybox-7b57f96db7-6hrm6 is already assigned to node \"ha-472903\"" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-6hrm6" node="ha-472903"
	E0916 23:58:40.050213       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod bd03bad4-af1e-42d0-81fb-6fcaeaa8775e(default/busybox-7b57f96db7-6hrm6) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="default/busybox-7b57f96db7-6hrm6"
	E0916 23:58:40.050248       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-6hrm6\": pod busybox-7b57f96db7-6hrm6 is already assigned to node \"ha-472903\"" logger="UnhandledError" pod="default/busybox-7b57f96db7-6hrm6"
	I0916 23:58:40.051853       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7b57f96db7-6hrm6" node="ha-472903"
	
	
	==> kubelet <==
	Sep 17 00:13:35 ha-472903 kubelet[620]: I0917 00:13:35.179855     620 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-ha-472903"
	Sep 17 00:13:35 ha-472903 kubelet[620]: E0917 00:13:35.187290     620 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-ha-472903\" already exists" pod="kube-system/etcd-ha-472903"
	Sep 17 00:13:35 ha-472903 kubelet[620]: I0917 00:13:35.187325     620 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ha-472903"
	Sep 17 00:13:35 ha-472903 kubelet[620]: E0917 00:13:35.196172     620 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ha-472903\" already exists" pod="kube-system/kube-apiserver-ha-472903"
	Sep 17 00:13:36 ha-472903 kubelet[620]: I0917 00:13:36.029595     620 apiserver.go:52] "Watching apiserver"
	Sep 17 00:13:36 ha-472903 kubelet[620]: I0917 00:13:36.036032     620 kubelet.go:3202] "Trying to delete pod" pod="kube-system/kube-vip-ha-472903" podUID="ccdab212-cf0c-4bf0-958b-173e1008f7bc"
	Sep 17 00:13:36 ha-472903 kubelet[620]: I0917 00:13:36.052303     620 kubelet.go:3208] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/kube-vip-ha-472903"
	Sep 17 00:13:36 ha-472903 kubelet[620]: I0917 00:13:36.052325     620 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-vip-ha-472903"
	Sep 17 00:13:36 ha-472903 kubelet[620]: I0917 00:13:36.131204     620 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 17 00:13:36 ha-472903 kubelet[620]: I0917 00:13:36.137227     620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-472903" podStartSLOduration=0.137196984 podStartE2EDuration="137.196984ms" podCreationTimestamp="2025-09-17 00:13:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-17 00:13:36.118811818 +0000 UTC m=+7.151916686" watchObservedRunningTime="2025-09-17 00:13:36.137196984 +0000 UTC m=+7.170301850"
	Sep 17 00:13:36 ha-472903 kubelet[620]: I0917 00:13:36.155169     620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d4a70eec-48a7-4ea6-871a-1b5ed2beca9a-xtables-lock\") pod \"kube-proxy-d4m8f\" (UID: \"d4a70eec-48a7-4ea6-871a-1b5ed2beca9a\") " pod="kube-system/kube-proxy-d4m8f"
	Sep 17 00:13:36 ha-472903 kubelet[620]: I0917 00:13:36.156175     620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/1da43ca7-9af7-4573-9cdc-fd21b098ca2c-cni-cfg\") pod \"kindnet-lh7dv\" (UID: \"1da43ca7-9af7-4573-9cdc-fd21b098ca2c\") " pod="kube-system/kindnet-lh7dv"
	Sep 17 00:13:36 ha-472903 kubelet[620]: I0917 00:13:36.156592     620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1da43ca7-9af7-4573-9cdc-fd21b098ca2c-lib-modules\") pod \"kindnet-lh7dv\" (UID: \"1da43ca7-9af7-4573-9cdc-fd21b098ca2c\") " pod="kube-system/kindnet-lh7dv"
	Sep 17 00:13:36 ha-472903 kubelet[620]: I0917 00:13:36.156960     620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d4a70eec-48a7-4ea6-871a-1b5ed2beca9a-lib-modules\") pod \"kube-proxy-d4m8f\" (UID: \"d4a70eec-48a7-4ea6-871a-1b5ed2beca9a\") " pod="kube-system/kube-proxy-d4m8f"
	Sep 17 00:13:36 ha-472903 kubelet[620]: I0917 00:13:36.157372     620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1da43ca7-9af7-4573-9cdc-fd21b098ca2c-xtables-lock\") pod \"kindnet-lh7dv\" (UID: \"1da43ca7-9af7-4573-9cdc-fd21b098ca2c\") " pod="kube-system/kindnet-lh7dv"
	Sep 17 00:13:36 ha-472903 kubelet[620]: I0917 00:13:36.157474     620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ac7f283e-4d28-46cf-a519-bd227237d5e7-tmp\") pod \"storage-provisioner\" (UID: \"ac7f283e-4d28-46cf-a519-bd227237d5e7\") " pod="kube-system/storage-provisioner"
	Sep 17 00:13:37 ha-472903 kubelet[620]: I0917 00:13:37.056986     620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="021b917bf994c60a5ce7bb1b5d713b5b" path="/var/lib/kubelet/pods/021b917bf994c60a5ce7bb1b5d713b5b/volumes"
	Sep 17 00:13:38 ha-472903 kubelet[620]: I0917 00:13:38.149724     620 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 17 00:13:44 ha-472903 kubelet[620]: I0917 00:13:44.396062     620 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 17 00:13:44 ha-472903 kubelet[620]: I0917 00:13:44.750098     620 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 17 00:14:07 ha-472903 kubelet[620]: I0917 00:14:07.229109     620 scope.go:117] "RemoveContainer" containerID="5a5a17cca6c0a643b6c0881dab5508dcb7de8e6ad77d7e6ecb81d434ab2cc8a1"
	Sep 17 00:14:07 ha-472903 kubelet[620]: I0917 00:14:07.229537     620 scope.go:117] "RemoveContainer" containerID="360a9ae449a3affbb5373c19b5e7e14e1da3ec8397f5e21f1d3c31e298455266"
	Sep 17 00:14:07 ha-472903 kubelet[620]: E0917 00:14:07.229764     620 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(ac7f283e-4d28-46cf-a519-bd227237d5e7)\"" pod="kube-system/storage-provisioner" podUID="ac7f283e-4d28-46cf-a519-bd227237d5e7"
	Sep 17 00:14:20 ha-472903 kubelet[620]: I0917 00:14:20.052702     620 scope.go:117] "RemoveContainer" containerID="360a9ae449a3affbb5373c19b5e7e14e1da3ec8397f5e21f1d3c31e298455266"
	Sep 17 00:14:29 ha-472903 kubelet[620]: I0917 00:14:29.046747     620 scope.go:117] "RemoveContainer" containerID="8683544e2a9d579448e28b8f33653e2c8d1315b2d07bd7b4ce574428d93c6f3a"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-472903 -n ha-472903
helpers_test.go:269: (dbg) Run:  kubectl --context ha-472903 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-7b57f96db7-mknzs
helpers_test.go:282: ======> post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context ha-472903 describe pod busybox-7b57f96db7-mknzs
helpers_test.go:290: (dbg) kubectl --context ha-472903 describe pod busybox-7b57f96db7-mknzs:

                                                
                                                
-- stdout --
	Name:             busybox-7b57f96db7-mknzs
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             ha-472903-m03/192.168.49.4
	Start Time:       Tue, 16 Sep 2025 23:58:37 +0000
	Labels:           app=busybox
	                  pod-template-hash=7b57f96db7
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7b57f96db7
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gmz92 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-gmz92:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason                  Age                   From               Message
	  ----     ------                  ----                  ----               -------
	  Warning  FailedScheduling        21m                   default-scheduler  running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "busybox-7b57f96db7-mknzs": pod busybox-7b57f96db7-mknzs is already assigned to node "ha-472903-m03"
	  Warning  FailedScheduling        21m                   default-scheduler  running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "busybox-7b57f96db7-mknzs": pod busybox-7b57f96db7-mknzs is already assigned to node "ha-472903-m03"
	  Normal   Scheduled               21m                   default-scheduler  Successfully assigned default/busybox-7b57f96db7-mknzs to ha-472903-m03
	  Warning  FailedCreatePodSandBox  21m                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "72439adc47052c2da00cee62587d780275cf6c2423dee9831567464d4725ee9d": failed to find network info for sandbox "72439adc47052c2da00cee62587d780275cf6c2423dee9831567464d4725ee9d"
	  Warning  FailedCreatePodSandBox  21m                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "24ab8b6bd2f38653d2326c375fc81ebf17317e36885547c7b42c011bb95889ed": failed to find network info for sandbox "24ab8b6bd2f38653d2326c375fc81ebf17317e36885547c7b42c011bb95889ed"
	  Warning  FailedCreatePodSandBox  20m                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "300fece4c100bc3e68a19e1fa6f46c8a378753727caaaeb1533dab71f234be58": failed to find network info for sandbox "300fece4c100bc3e68a19e1fa6f46c8a378753727caaaeb1533dab71f234be58"
	  Warning  FailedCreatePodSandBox  20m                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "e49a14b4de5e24fa450a43c124b2916ad7028d35cbc3b0f74595e68ee161d1d0": failed to find network info for sandbox "e49a14b4de5e24fa450a43c124b2916ad7028d35cbc3b0f74595e68ee161d1d0"
	  Warning  FailedCreatePodSandBox  20m                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "efa290ca498f7c70ae29d8d97709edda97bc6b062aac05a3ef6d6a83fbd42797": failed to find network info for sandbox "efa290ca498f7c70ae29d8d97709edda97bc6b062aac05a3ef6d6a83fbd42797"
	  Warning  FailedCreatePodSandBox  20m                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "d5851ce1270b1c8994400ecd7bdabadaf895488957ffb5173dcd7e289db1de6c": failed to find network info for sandbox "d5851ce1270b1c8994400ecd7bdabadaf895488957ffb5173dcd7e289db1de6c"
	  Warning  FailedCreatePodSandBox  20m                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "11aaa894ae434b08da8122c8f3445d03b4c1e54dfb071596f63a0e4654f49f10": failed to find network info for sandbox "11aaa894ae434b08da8122c8f3445d03b4c1e54dfb071596f63a0e4654f49f10"
	  Warning  FailedCreatePodSandBox  19m                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "c8126e80126ff891a4935c60cfec55753f6bb51d789c0eb46098b72267c7d53c": failed to find network info for sandbox "c8126e80126ff891a4935c60cfec55753f6bb51d789c0eb46098b72267c7d53c"
	  Warning  FailedCreatePodSandBox  19m                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "1389a2f92f350a6f495c76f80031300b6442a6a0cc67abd4b045ff9150b3fc3a": failed to find network info for sandbox "1389a2f92f350a6f495c76f80031300b6442a6a0cc67abd4b045ff9150b3fc3a"
	  Warning  FailedCreatePodSandBox  11m (x38 over 19m)    kubelet            (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "c3a9afe91461f3ea405980387ac5fab85785c7cf3f180d2b0f894e1df94ca62d": failed to find network info for sandbox "c3a9afe91461f3ea405980387ac5fab85785c7cf3f180d2b0f894e1df94ca62d"
	  Warning  FailedCreatePodSandBox  6m17s                 kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "cda2dbf0c972c31f968537f7c9f21418241587330d1d712cfb4a0ba3e550c20a": failed to find network info for sandbox "cda2dbf0c972c31f968537f7c9f21418241587330d1d712cfb4a0ba3e550c20a"
	  Warning  FailedCreatePodSandBox  6m6s                  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "14baf71b9dc7d984b12f55350924cb6137132787926dfa3eb49fe12f9e9732e0": failed to find network info for sandbox "14baf71b9dc7d984b12f55350924cb6137132787926dfa3eb49fe12f9e9732e0"
	  Warning  FailedCreatePodSandBox  5m55s                 kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "4fda22fa356242a6303e742b7926a1140b8be05b55bcea00b899022d060df95e": failed to find network info for sandbox "4fda22fa356242a6303e742b7926a1140b8be05b55bcea00b899022d060df95e"
	  Warning  FailedCreatePodSandBox  5m40s                 kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "37f9385599ac18efa02dd03c7047c8b3e6a75a8b7f91dfe8484fddced2a824ed": failed to find network info for sandbox "37f9385599ac18efa02dd03c7047c8b3e6a75a8b7f91dfe8484fddced2a824ed"
	  Warning  FailedCreatePodSandBox  5m27s                 kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "0b148ebd3b8e97d4391af8b47620ef0885784970674f3476da8221da65098065": failed to find network info for sandbox "0b148ebd3b8e97d4391af8b47620ef0885784970674f3476da8221da65098065"
	  Warning  FailedCreatePodSandBox  5m12s                 kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "a5e01c8f5b92cf42b925fff20aede6e1ce9750c1dee2a430c1fac568b003a150": failed to find network info for sandbox "a5e01c8f5b92cf42b925fff20aede6e1ce9750c1dee2a430c1fac568b003a150"
	  Warning  FailedCreatePodSandBox  4m58s                 kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "eaf60379ea9737447a0ced0b3acbf261f20a8637d61915033bcd13aa3d55f0e7": failed to find network info for sandbox "eaf60379ea9737447a0ced0b3acbf261f20a8637d61915033bcd13aa3d55f0e7"
	  Warning  FailedCreatePodSandBox  4m46s                 kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "c8b861c6f31cb04b10441261fa1d4ab277c779cbb3e4207b875443492787579b": failed to find network info for sandbox "c8b861c6f31cb04b10441261fa1d4ab277c779cbb3e4207b875443492787579b"
	  Warning  FailedCreatePodSandBox  4m32s                 kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "3bc31343ae5e828e0f218c4e2af9baae994944ce9ee4778ff05e9186d81c0368": failed to find network info for sandbox "3bc31343ae5e828e0f218c4e2af9baae994944ce9ee4778ff05e9186d81c0368"
	  Warning  FailedCreatePodSandBox  41s (x17 over 4m18s)  kubelet            (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "2a3096b683e87898b473a0ce3027c341e590629e445b40626170b29c13e76fc5": failed to find network info for sandbox "2a3096b683e87898b473a0ce3027c341e590629e445b40626170b29c13e76fc5"

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (421.96s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (8.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-472903 node delete m03 --alsologtostderr -v 5: (6.070102662s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 status --alsologtostderr -v 5
ha_test.go:495: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-472903 status --alsologtostderr -v 5: exit status 7 (503.947583ms)

                                                
                                                
-- stdout --
	ha-472903
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-472903-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-472903-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 00:20:05.170283  850369 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:20:05.170375  850369 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:20:05.170383  850369 out.go:374] Setting ErrFile to fd 2...
	I0917 00:20:05.170387  850369 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:20:05.170615  850369 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-749120/.minikube/bin
	I0917 00:20:05.170779  850369 out.go:368] Setting JSON to false
	I0917 00:20:05.170799  850369 mustload.go:65] Loading cluster: ha-472903
	I0917 00:20:05.170876  850369 notify.go:220] Checking for updates...
	I0917 00:20:05.171186  850369 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:20:05.171216  850369 status.go:174] checking status of ha-472903 ...
	I0917 00:20:05.171686  850369 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0917 00:20:05.190045  850369 status.go:371] ha-472903 host status = "Running" (err=<nil>)
	I0917 00:20:05.190101  850369 host.go:66] Checking if "ha-472903" exists ...
	I0917 00:20:05.190373  850369 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903
	I0917 00:20:05.207082  850369 host.go:66] Checking if "ha-472903" exists ...
	I0917 00:20:05.207292  850369 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:20:05.207347  850369 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0917 00:20:05.225477  850369 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33574 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0917 00:20:05.317353  850369 ssh_runner.go:195] Run: systemctl --version
	I0917 00:20:05.321663  850369 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:20:05.332935  850369 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:20:05.387878  850369 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:65 SystemTime:2025-09-17 00:20:05.377070578 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:20:05.388485  850369 kubeconfig.go:125] found "ha-472903" server: "https://192.168.49.254:8443"
	I0917 00:20:05.388521  850369 api_server.go:166] Checking apiserver status ...
	I0917 00:20:05.388563  850369 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:20:05.400398  850369 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1085/cgroup
	W0917 00:20:05.409700  850369 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1085/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:20:05.409748  850369 ssh_runner.go:195] Run: ls
	I0917 00:20:05.413123  850369 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:20:05.417246  850369 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:20:05.417263  850369 status.go:463] ha-472903 apiserver status = Running (err=<nil>)
	I0917 00:20:05.417273  850369 status.go:176] ha-472903 status: &{Name:ha-472903 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:20:05.417289  850369 status.go:174] checking status of ha-472903-m02 ...
	I0917 00:20:05.417536  850369 cli_runner.go:164] Run: docker container inspect ha-472903-m02 --format={{.State.Status}}
	I0917 00:20:05.435969  850369 status.go:371] ha-472903-m02 host status = "Running" (err=<nil>)
	I0917 00:20:05.435992  850369 host.go:66] Checking if "ha-472903-m02" exists ...
	I0917 00:20:05.436336  850369 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m02
	I0917 00:20:05.452162  850369 host.go:66] Checking if "ha-472903-m02" exists ...
	I0917 00:20:05.452402  850369 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:20:05.452474  850369 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0917 00:20:05.470012  850369 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33579 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa Username:docker}
	I0917 00:20:05.562157  850369 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:20:05.573809  850369 kubeconfig.go:125] found "ha-472903" server: "https://192.168.49.254:8443"
	I0917 00:20:05.573835  850369 api_server.go:166] Checking apiserver status ...
	I0917 00:20:05.573864  850369 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:20:05.584792  850369 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/692/cgroup
	W0917 00:20:05.594016  850369 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/692/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:20:05.594061  850369 ssh_runner.go:195] Run: ls
	I0917 00:20:05.597398  850369 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:20:05.601510  850369 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:20:05.601535  850369 status.go:463] ha-472903-m02 apiserver status = Running (err=<nil>)
	I0917 00:20:05.601545  850369 status.go:176] ha-472903-m02 status: &{Name:ha-472903-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:20:05.601562  850369 status.go:174] checking status of ha-472903-m04 ...
	I0917 00:20:05.601840  850369 cli_runner.go:164] Run: docker container inspect ha-472903-m04 --format={{.State.Status}}
	I0917 00:20:05.623264  850369 status.go:371] ha-472903-m04 host status = "Stopped" (err=<nil>)
	I0917 00:20:05.623282  850369 status.go:384] host is not running, skipping remaining checks
	I0917 00:20:05.623290  850369 status.go:176] ha-472903-m04 status: &{Name:ha-472903-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:497: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-472903 status --alsologtostderr -v 5" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-472903
helpers_test.go:243: (dbg) docker inspect ha-472903:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "05f03528ecc5ba6a39041bcc2845d236679d61fa3752c15e7e068dac7d8c9047",
	        "Created": "2025-09-16T23:56:35.178831158Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 838588,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-17T00:13:23.170247962Z",
	            "FinishedAt": "2025-09-17T00:13:22.548619261Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/05f03528ecc5ba6a39041bcc2845d236679d61fa3752c15e7e068dac7d8c9047/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/05f03528ecc5ba6a39041bcc2845d236679d61fa3752c15e7e068dac7d8c9047/hostname",
	        "HostsPath": "/var/lib/docker/containers/05f03528ecc5ba6a39041bcc2845d236679d61fa3752c15e7e068dac7d8c9047/hosts",
	        "LogPath": "/var/lib/docker/containers/05f03528ecc5ba6a39041bcc2845d236679d61fa3752c15e7e068dac7d8c9047/05f03528ecc5ba6a39041bcc2845d236679d61fa3752c15e7e068dac7d8c9047-json.log",
	        "Name": "/ha-472903",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-472903:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-472903",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "05f03528ecc5ba6a39041bcc2845d236679d61fa3752c15e7e068dac7d8c9047",
	                "LowerDir": "/var/lib/docker/overlay2/37229b42d46c992f89d690b880f5a9c43e154eecc2ad5aeee133e9eb30accccb-init/diff:/var/lib/docker/overlay2/949a3fbecd0c2c005aa419b7ddc214e7bf4333225d7b227e8b0d0ea188b945ec/diff",
	                "MergedDir": "/var/lib/docker/overlay2/37229b42d46c992f89d690b880f5a9c43e154eecc2ad5aeee133e9eb30accccb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/37229b42d46c992f89d690b880f5a9c43e154eecc2ad5aeee133e9eb30accccb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/37229b42d46c992f89d690b880f5a9c43e154eecc2ad5aeee133e9eb30accccb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-472903",
	                "Source": "/var/lib/docker/volumes/ha-472903/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-472903",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-472903",
	                "name.minikube.sigs.k8s.io": "ha-472903",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f681bc3451c2f9b5cdb2156ffcba04f0e713f66cdf73bde32e7115dbf471fa7b",
	            "SandboxKey": "/var/run/docker/netns/f681bc3451c2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33574"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33575"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33578"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33576"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33577"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-472903": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:43:7c:dc:22:69",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "22d49b2f397dfabc2a3967bd54b05204a52976e683f65ff07bff00e793040bef",
	                    "EndpointID": "4140add73c3678ffb48555035c60424ac6e443ed664566963b98cd7acf01832d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-472903",
	                        "05f03528ecc5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-472903 -n ha-472903
helpers_test.go:252: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p ha-472903 logs -n 25: (1.422286133s)
helpers_test.go:260: TestMultiControlPlane/serial/DeleteSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-472903 ssh -n ha-472903-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │ 17 Sep 25 00:11 UTC │
	│ ssh     │ ha-472903 ssh -n ha-472903-m02 sudo cat /home/docker/cp-test_ha-472903-m03_ha-472903-m02.txt                                         │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │ 17 Sep 25 00:11 UTC │
	│ cp      │ ha-472903 cp ha-472903-m03:/home/docker/cp-test.txt ha-472903-m04:/home/docker/cp-test_ha-472903-m03_ha-472903-m04.txt               │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ ssh     │ ha-472903 ssh -n ha-472903-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │ 17 Sep 25 00:11 UTC │
	│ ssh     │ ha-472903 ssh -n ha-472903-m04 sudo cat /home/docker/cp-test_ha-472903-m03_ha-472903-m04.txt                                         │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ cp      │ ha-472903 cp testdata/cp-test.txt ha-472903-m04:/home/docker/cp-test.txt                                                             │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ ssh     │ ha-472903 ssh -n ha-472903-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ cp      │ ha-472903 cp ha-472903-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2970888916/001/cp-test_ha-472903-m04.txt │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ ssh     │ ha-472903 ssh -n ha-472903-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ cp      │ ha-472903 cp ha-472903-m04:/home/docker/cp-test.txt ha-472903:/home/docker/cp-test_ha-472903-m04_ha-472903.txt                       │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ ssh     │ ha-472903 ssh -n ha-472903-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ ssh     │ ha-472903 ssh -n ha-472903 sudo cat /home/docker/cp-test_ha-472903-m04_ha-472903.txt                                                 │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ cp      │ ha-472903 cp ha-472903-m04:/home/docker/cp-test.txt ha-472903-m02:/home/docker/cp-test_ha-472903-m04_ha-472903-m02.txt               │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ ssh     │ ha-472903 ssh -n ha-472903-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ ssh     │ ha-472903 ssh -n ha-472903-m02 sudo cat /home/docker/cp-test_ha-472903-m04_ha-472903-m02.txt                                         │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ cp      │ ha-472903 cp ha-472903-m04:/home/docker/cp-test.txt ha-472903-m03:/home/docker/cp-test_ha-472903-m04_ha-472903-m03.txt               │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ ssh     │ ha-472903 ssh -n ha-472903-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ ssh     │ ha-472903 ssh -n ha-472903-m03 sudo cat /home/docker/cp-test_ha-472903-m04_ha-472903-m03.txt                                         │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ node    │ ha-472903 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │ 17 Sep 25 00:11 UTC │
	│ node    │ ha-472903 node start m02 --alsologtostderr -v 5                                                                                      │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │ 17 Sep 25 00:11 UTC │
	│ node    │ ha-472903 node list --alsologtostderr -v 5                                                                                           │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │                     │
	│ stop    │ ha-472903 stop --alsologtostderr -v 5                                                                                                │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │ 17 Sep 25 00:13 UTC │
	│ start   │ ha-472903 start --wait true --alsologtostderr -v 5                                                                                   │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:13 UTC │                     │
	│ node    │ ha-472903 node list --alsologtostderr -v 5                                                                                           │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:19 UTC │                     │
	│ node    │ ha-472903 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:19 UTC │ 17 Sep 25 00:20 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/17 00:13:22
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 00:13:22.953197  838391 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:13:22.953530  838391 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:13:22.953542  838391 out.go:374] Setting ErrFile to fd 2...
	I0917 00:13:22.953549  838391 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:13:22.953766  838391 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-749120/.minikube/bin
	I0917 00:13:22.954306  838391 out.go:368] Setting JSON to false
	I0917 00:13:22.955398  838391 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":10545,"bootTime":1758057458,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 00:13:22.955520  838391 start.go:140] virtualization: kvm guest
	I0917 00:13:22.957510  838391 out.go:179] * [ha-472903] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0917 00:13:22.958615  838391 out.go:179]   - MINIKUBE_LOCATION=21550
	I0917 00:13:22.958642  838391 notify.go:220] Checking for updates...
	I0917 00:13:22.960507  838391 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 00:13:22.961674  838391 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-749120/kubeconfig
	I0917 00:13:22.962866  838391 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-749120/.minikube
	I0917 00:13:22.964443  838391 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 00:13:22.965391  838391 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 00:13:22.966891  838391 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:13:22.966986  838391 driver.go:421] Setting default libvirt URI to qemu:///system
	I0917 00:13:22.992446  838391 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0917 00:13:22.992525  838391 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:13:23.045449  838391 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-17 00:13:23.034509691 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:13:23.045556  838391 docker.go:318] overlay module found
	I0917 00:13:23.047016  838391 out.go:179] * Using the docker driver based on existing profile
	I0917 00:13:23.047922  838391 start.go:304] selected driver: docker
	I0917 00:13:23.047937  838391 start.go:918] validating driver "docker" against &{Name:ha-472903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNam
es:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress
:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:13:23.048084  838391 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 00:13:23.048209  838391 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:13:23.101147  838391 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-17 00:13:23.091009521 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:13:23.102012  838391 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:13:23.102057  838391 cni.go:84] Creating CNI manager for ""
	I0917 00:13:23.102129  838391 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0917 00:13:23.102195  838391 start.go:348] cluster config:
	{Name:ha-472903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISoc
ket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-prov
isioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Custom
QemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:13:23.103903  838391 out.go:179] * Starting "ha-472903" primary control-plane node in "ha-472903" cluster
	I0917 00:13:23.104759  838391 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0917 00:13:23.105814  838391 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:13:23.106795  838391 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0917 00:13:23.106833  838391 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4
	I0917 00:13:23.106844  838391 cache.go:58] Caching tarball of preloaded images
	I0917 00:13:23.106881  838391 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:13:23.106921  838391 preload.go:172] Found /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 00:13:23.106932  838391 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0917 00:13:23.107045  838391 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0917 00:13:23.127051  838391 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:13:23.127078  838391 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:13:23.127093  838391 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:13:23.127117  838391 start.go:360] acquireMachinesLock for ha-472903: {Name:mk994658ce3314f2aed1dec341debc49d36a4326 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:13:23.127173  838391 start.go:364] duration metric: took 38.444µs to acquireMachinesLock for "ha-472903"
	I0917 00:13:23.127192  838391 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:13:23.127199  838391 fix.go:54] fixHost starting: 
	I0917 00:13:23.127403  838391 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0917 00:13:23.144605  838391 fix.go:112] recreateIfNeeded on ha-472903: state=Stopped err=<nil>
	W0917 00:13:23.144651  838391 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:13:23.146403  838391 out.go:252] * Restarting existing docker container for "ha-472903" ...
	I0917 00:13:23.146471  838391 cli_runner.go:164] Run: docker start ha-472903
	I0917 00:13:23.362855  838391 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0917 00:13:23.380820  838391 kic.go:430] container "ha-472903" state is running.
	I0917 00:13:23.381209  838391 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903
	I0917 00:13:23.398851  838391 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0917 00:13:23.399057  838391 machine.go:93] provisionDockerMachine start ...
	I0917 00:13:23.399113  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0917 00:13:23.416213  838391 main.go:141] libmachine: Using SSH client type: native
	I0917 00:13:23.416490  838391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33574 <nil> <nil>}
	I0917 00:13:23.416505  838391 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:13:23.417056  838391 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37384->127.0.0.1:33574: read: connection reset by peer
	I0917 00:13:26.554176  838391 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-472903
	
	I0917 00:13:26.554202  838391 ubuntu.go:182] provisioning hostname "ha-472903"
	I0917 00:13:26.554275  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0917 00:13:26.572576  838391 main.go:141] libmachine: Using SSH client type: native
	I0917 00:13:26.572800  838391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33574 <nil> <nil>}
	I0917 00:13:26.572813  838391 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-472903 && echo "ha-472903" | sudo tee /etc/hostname
	I0917 00:13:26.719562  838391 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-472903
	
	I0917 00:13:26.719659  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0917 00:13:26.737757  838391 main.go:141] libmachine: Using SSH client type: native
	I0917 00:13:26.738008  838391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33574 <nil> <nil>}
	I0917 00:13:26.738032  838391 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-472903' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-472903/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-472903' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 00:13:26.872954  838391 main.go:141] libmachine: SSH cmd err, output: <nil>: 
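The shell snippet above is how the hostname gets pinned in the guest's /etc/hosts: if no line already resolves ha-472903, the existing 127.0.1.1 entry is rewritten in place, otherwise a new one is appended. A minimal Go sketch of the same edit, assuming direct access to the hosts file rather than the SSH round-trip minikube uses (ensureHostEntry is a hypothetical helper, not minikube code):

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    	"strings"
    )

    // ensureHostEntry rewrites the 127.0.1.1 line to point at hostname,
    // or appends one if no entry for hostname exists yet.
    func ensureHostEntry(path, hostname string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	lines := strings.Split(string(data), "\n")
    	hostRe := regexp.MustCompile(`\s` + regexp.QuoteMeta(hostname) + `$`)
    	loopRe := regexp.MustCompile(`^127\.0\.1\.1\s`)
    	for _, l := range lines {
    		if hostRe.MatchString(l) {
    			return nil // already resolvable, nothing to do
    		}
    	}
    	for i, l := range lines {
    		if loopRe.MatchString(l) {
    			lines[i] = "127.0.1.1 " + hostname // rewrite existing entry
    			return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
    		}
    	}
    	lines = append(lines, "127.0.1.1 "+hostname) // no entry at all: append one
    	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
    }

    func main() {
    	if err := ensureHostEntry("/etc/hosts", "ha-472903"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }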
	I0917 00:13:26.872993  838391 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-749120/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-749120/.minikube}
	I0917 00:13:26.873020  838391 ubuntu.go:190] setting up certificates
	I0917 00:13:26.873033  838391 provision.go:84] configureAuth start
	I0917 00:13:26.873086  838391 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903
	I0917 00:13:26.891066  838391 provision.go:143] copyHostCerts
	I0917 00:13:26.891111  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem
	I0917 00:13:26.891147  838391 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem, removing ...
	I0917 00:13:26.891169  838391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem
	I0917 00:13:26.891262  838391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem (1078 bytes)
	I0917 00:13:26.891384  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem
	I0917 00:13:26.891432  838391 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem, removing ...
	I0917 00:13:26.891443  838391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem
	I0917 00:13:26.891485  838391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem (1123 bytes)
	I0917 00:13:26.891575  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem
	I0917 00:13:26.891600  838391 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem, removing ...
	I0917 00:13:26.891610  838391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem
	I0917 00:13:26.891648  838391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem (1675 bytes)
	I0917 00:13:26.891725  838391 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem org=jenkins.ha-472903 san=[127.0.0.1 192.168.49.2 ha-472903 localhost minikube]
	I0917 00:13:27.127844  838391 provision.go:177] copyRemoteCerts
	I0917 00:13:27.127908  838391 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:13:27.127972  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0917 00:13:27.146507  838391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33574 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0917 00:13:27.243455  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 00:13:27.243525  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 00:13:27.269313  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 00:13:27.269382  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0917 00:13:27.294966  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 00:13:27.295048  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 00:13:27.320815  838391 provision.go:87] duration metric: took 447.761849ms to configureAuth
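configureAuth copies the host CA material into the machine and then issues a server certificate whose SANs cover the addresses listed in the log (127.0.0.1, 192.168.49.2, ha-472903, localhost, minikube) for org jenkins.ha-472903. A rough, self-contained sketch of issuing a SAN-bearing server certificate with Go's crypto/x509; the throwaway in-memory CA, the 2048-bit key size, and the validity window here are assumptions for illustration, not minikube's actual values, and error handling is elided:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Throwaway CA standing in for ca.pem / ca-key.pem from the log.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Server certificate with the SANs seen in the log.
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "ha-472903", Organization: []string{"jenkins.ha-472903"}},
    		DNSNames:     []string{"ha-472903", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

    	// PEM output corresponds to the server.pem that gets scp'd to /etc/docker.
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }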
	I0917 00:13:27.320860  838391 ubuntu.go:206] setting minikube options for container-runtime
	I0917 00:13:27.321072  838391 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:13:27.321085  838391 machine.go:96] duration metric: took 3.922015218s to provisionDockerMachine
	I0917 00:13:27.321092  838391 start.go:293] postStartSetup for "ha-472903" (driver="docker")
	I0917 00:13:27.321102  838391 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 00:13:27.321150  838391 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 00:13:27.321188  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0917 00:13:27.339742  838391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33574 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0917 00:13:27.437715  838391 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 00:13:27.441468  838391 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 00:13:27.441498  838391 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 00:13:27.441506  838391 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 00:13:27.441513  838391 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 00:13:27.441524  838391 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-749120/.minikube/addons for local assets ...
	I0917 00:13:27.441576  838391 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-749120/.minikube/files for local assets ...
	I0917 00:13:27.441647  838391 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> 7527072.pem in /etc/ssl/certs
	I0917 00:13:27.441657  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> /etc/ssl/certs/7527072.pem
	I0917 00:13:27.441747  838391 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 00:13:27.451010  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem --> /etc/ssl/certs/7527072.pem (1708 bytes)
	I0917 00:13:27.477190  838391 start.go:296] duration metric: took 156.078591ms for postStartSetup
	I0917 00:13:27.477273  838391 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:13:27.477311  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0917 00:13:27.495838  838391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33574 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0917 00:13:27.588631  838391 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:13:27.593367  838391 fix.go:56] duration metric: took 4.46615876s for fixHost
	I0917 00:13:27.593398  838391 start.go:83] releasing machines lock for "ha-472903", held for 4.466212718s
	I0917 00:13:27.593488  838391 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903
	I0917 00:13:27.611894  838391 ssh_runner.go:195] Run: cat /version.json
	I0917 00:13:27.611963  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0917 00:13:27.611984  838391 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 00:13:27.612068  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0917 00:13:27.630790  838391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33574 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0917 00:13:27.632015  838391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33574 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0917 00:13:27.723564  838391 ssh_runner.go:195] Run: systemctl --version
	I0917 00:13:27.805571  838391 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 00:13:27.810704  838391 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0917 00:13:27.829982  838391 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0917 00:13:27.830056  838391 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:13:27.839307  838391 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
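The two find/sed invocations above normalize any loopback CNI config (adding a "name" field if missing and pinning cniVersion to 1.0.0) and would rename bridge/podman configs out of the way; none were found in this run. The same touch-up, sketched as a direct JSON edit in Go rather than the shell pipeline; the file path is an assumption, since in practice it is whatever *loopback.conf* the find matches:

    package main

    import (
    	"encoding/json"
    	"log"
    	"os"
    )

    // patchLoopbackConf adds a "name" field and pins cniVersion to 1.0.0,
    // mirroring what the sed commands in the log do to *loopback.conf*.
    func patchLoopbackConf(path string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var conf map[string]interface{}
    	if err := json.Unmarshal(data, &conf); err != nil {
    		return err
    	}
    	if _, ok := conf["name"]; !ok {
    		conf["name"] = "loopback"
    	}
    	conf["cniVersion"] = "1.0.0"
    	out, err := json.MarshalIndent(conf, "", "  ")
    	if err != nil {
    		return err
    	}
    	return os.WriteFile(path, out, 0644)
    }

    func main() {
    	// Hypothetical file name; any *loopback.conf* under /etc/cni/net.d qualifies.
    	if err := patchLoopbackConf("/etc/cni/net.d/200-loopback.conf"); err != nil {
    		log.Fatal(err)
    	}
    }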
	I0917 00:13:27.839334  838391 start.go:495] detecting cgroup driver to use...
	I0917 00:13:27.839374  838391 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:13:27.839455  838391 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 00:13:27.853620  838391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 00:13:27.866086  838391 docker.go:218] disabling cri-docker service (if available) ...
	I0917 00:13:27.866143  838391 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 00:13:27.879568  838391 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 00:13:27.891699  838391 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 00:13:27.957039  838391 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 00:13:28.019649  838391 docker.go:234] disabling docker service ...
	I0917 00:13:28.019719  838391 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 00:13:28.032725  838391 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 00:13:28.045044  838391 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 00:13:28.110090  838391 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 00:13:28.176290  838391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 00:13:28.188485  838391 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:13:28.206191  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0917 00:13:28.216912  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 00:13:28.227586  838391 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0917 00:13:28.227653  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0917 00:13:28.238198  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:13:28.248607  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 00:13:28.258883  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:13:28.269300  838391 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 00:13:28.279692  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 00:13:28.290638  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 00:13:28.301524  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 00:13:28.312695  838391 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 00:13:28.321821  838391 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 00:13:28.331494  838391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:13:28.395408  838391 ssh_runner.go:195] Run: sudo systemctl restart containerd
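The run of sed edits above rewrites /etc/containerd/config.toml in place: sandbox_image is pinned to registry.k8s.io/pause:3.10.1, legacy runtime names are mapped to io.containerd.runc.v2, conf_dir is pointed at /etc/cni/net.d, and SystemdCgroup is set to true so containerd matches the "systemd" cgroup driver detected on the host; containerd is then restarted. One of those edits, the SystemdCgroup flip, sketched as the equivalent regex replace in Go (assumed direct file access, not minikube's code):

    package main

    import (
    	"log"
    	"os"
    	"regexp"
    )

    // enableSystemdCgroup rewrites any `SystemdCgroup = ...` line to `true`,
    // preserving indentation, like the sed command in the log.
    func enableSystemdCgroup(path string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
    	patched := re.ReplaceAll(data, []byte("${1}SystemdCgroup = true"))
    	return os.WriteFile(path, patched, 0644)
    }

    func main() {
    	if err := enableSystemdCgroup("/etc/containerd/config.toml"); err != nil {
    		log.Fatal(err)
    	}
    	// After editing, the unit must be reloaded and containerd restarted,
    	// as the log does with `systemctl daemon-reload` and `systemctl restart containerd`.
    }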
	I0917 00:13:28.510345  838391 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0917 00:13:28.510442  838391 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0917 00:13:28.514486  838391 start.go:563] Will wait 60s for crictl version
	I0917 00:13:28.514543  838391 ssh_runner.go:195] Run: which crictl
	I0917 00:13:28.518058  838391 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:13:28.553392  838391 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
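After the restart, the start code waits up to 60s for /run/containerd/containerd.sock to appear and then up to 60s more for crictl to answer with a runtime version. A generic poll-with-timeout helper along those lines; the 500ms interval here is an assumption:

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForPath polls until the path exists or the timeout expires,
    // similar to the "Will wait 60s for socket path" step in the log.
    func waitForPath(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
    }

    func main() {
    	if err := waitForPath("/run/containerd/containerd.sock", 60*time.Second); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("containerd socket is ready")
    }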
	I0917 00:13:28.553470  838391 ssh_runner.go:195] Run: containerd --version
	I0917 00:13:28.578186  838391 ssh_runner.go:195] Run: containerd --version
	I0917 00:13:28.607037  838391 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0917 00:13:28.608343  838391 cli_runner.go:164] Run: docker network inspect ha-472903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:13:28.625981  838391 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0917 00:13:28.630074  838391 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:13:28.642270  838391 kubeadm.go:875] updating cluster {Name:ha-472903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIP
s:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false
ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiza
tions:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 00:13:28.642447  838391 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0917 00:13:28.642500  838391 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 00:13:28.677502  838391 containerd.go:627] all images are preloaded for containerd runtime.
	I0917 00:13:28.677528  838391 containerd.go:534] Images already preloaded, skipping extraction
	I0917 00:13:28.677596  838391 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 00:13:28.711767  838391 containerd.go:627] all images are preloaded for containerd runtime.
	I0917 00:13:28.711790  838391 cache_images.go:85] Images are preloaded, skipping loading
	I0917 00:13:28.711799  838391 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 containerd true true} ...
	I0917 00:13:28.711898  838391 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-472903 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 00:13:28.711952  838391 ssh_runner.go:195] Run: sudo crictl info
	I0917 00:13:28.748238  838391 cni.go:84] Creating CNI manager for ""
	I0917 00:13:28.748269  838391 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0917 00:13:28.748282  838391 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 00:13:28.748301  838391 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-472903 NodeName:ha-472903 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 00:13:28.748434  838391 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "ha-472903"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0917 00:13:28.748456  838391 kube-vip.go:115] generating kube-vip config ...
	I0917 00:13:28.748504  838391 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0917 00:13:28.761835  838391 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
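kube-vip's IPVS-based control-plane load balancing is only enabled when the ip_vs kernel modules are loaded; here `lsmod | grep ip_vs` exits non-zero, so the generated config falls back to plain ARP-style VIP handling on eth0. The same check can be made without shelling out by scanning /proc/modules, which is what lsmod reads, as in this sketch:

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    // ipvsLoaded reports whether any ip_vs* module appears in /proc/modules.
    func ipvsLoaded() (bool, error) {
    	f, err := os.Open("/proc/modules")
    	if err != nil {
    		return false, err
    	}
    	defer f.Close()
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		if strings.HasPrefix(sc.Text(), "ip_vs") {
    			return true, nil
    		}
    	}
    	return false, sc.Err()
    }

    func main() {
    	ok, err := ipvsLoaded()
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("ip_vs loaded:", ok)
    }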
	I0917 00:13:28.761950  838391 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0917 00:13:28.762005  838391 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 00:13:28.771377  838391 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 00:13:28.771466  838391 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0917 00:13:28.780815  838391 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0917 00:13:28.799673  838391 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 00:13:28.818695  838391 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I0917 00:13:28.837443  838391 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0917 00:13:28.856629  838391 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0917 00:13:28.860342  838391 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:13:28.871978  838391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:13:28.937920  838391 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:13:28.965162  838391 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903 for IP: 192.168.49.2
	I0917 00:13:28.965183  838391 certs.go:194] generating shared ca certs ...
	I0917 00:13:28.965200  838391 certs.go:226] acquiring lock for ca certs: {Name:mk87d179b4a631193bd9c86db8034ccf19400cde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:13:28.965352  838391 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key
	I0917 00:13:28.965429  838391 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key
	I0917 00:13:28.965446  838391 certs.go:256] generating profile certs ...
	I0917 00:13:28.965567  838391 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key
	I0917 00:13:28.965609  838391 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.8acf531c
	I0917 00:13:28.965631  838391 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.8acf531c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I0917 00:13:28.981661  838391 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.8acf531c ...
	I0917 00:13:28.981698  838391 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.8acf531c: {Name:mkdef0e1cbf73e7227a698510b51d68a698391c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:13:28.981868  838391 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.8acf531c ...
	I0917 00:13:28.981880  838391 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.8acf531c: {Name:mk80b61f5fe8d635199050a211c5a719c4b8f9fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:13:28.981959  838391 certs.go:381] copying /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.8acf531c -> /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt
	I0917 00:13:28.982123  838391 certs.go:385] copying /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.8acf531c -> /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key
	I0917 00:13:28.982267  838391 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key
	I0917 00:13:28.982283  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 00:13:28.982296  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 00:13:28.982309  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 00:13:28.982327  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 00:13:28.982340  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 00:13:28.982352  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 00:13:28.982367  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 00:13:28.982379  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 00:13:28.982446  838391 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem (1338 bytes)
	W0917 00:13:28.982481  838391 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707_empty.pem, impossibly tiny 0 bytes
	I0917 00:13:28.982491  838391 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 00:13:28.982517  838391 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem (1078 bytes)
	I0917 00:13:28.982539  838391 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem (1123 bytes)
	I0917 00:13:28.982559  838391 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem (1675 bytes)
	I0917 00:13:28.982598  838391 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem (1708 bytes)
	I0917 00:13:28.982624  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem -> /usr/share/ca-certificates/752707.pem
	I0917 00:13:28.982638  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> /usr/share/ca-certificates/7527072.pem
	I0917 00:13:28.982650  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:13:28.983259  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 00:13:29.011855  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 00:13:29.044116  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 00:13:29.076632  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0917 00:13:29.102081  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0917 00:13:29.127618  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 00:13:29.154054  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 00:13:29.181152  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 00:13:29.207152  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem --> /usr/share/ca-certificates/752707.pem (1338 bytes)
	I0917 00:13:29.234803  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem --> /usr/share/ca-certificates/7527072.pem (1708 bytes)
	I0917 00:13:29.261065  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 00:13:29.285817  838391 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 00:13:29.304802  838391 ssh_runner.go:195] Run: openssl version
	I0917 00:13:29.310548  838391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7527072.pem && ln -fs /usr/share/ca-certificates/7527072.pem /etc/ssl/certs/7527072.pem"
	I0917 00:13:29.321280  838391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7527072.pem
	I0917 00:13:29.325168  838391 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 23:54 /usr/share/ca-certificates/7527072.pem
	I0917 00:13:29.325220  838391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7527072.pem
	I0917 00:13:29.332550  838391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7527072.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 00:13:29.342450  838391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 00:13:29.352677  838391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:13:29.356484  838391 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:13:29.356557  838391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:13:29.363671  838391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 00:13:29.373502  838391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/752707.pem && ln -fs /usr/share/ca-certificates/752707.pem /etc/ssl/certs/752707.pem"
	I0917 00:13:29.383350  838391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/752707.pem
	I0917 00:13:29.386969  838391 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 23:54 /usr/share/ca-certificates/752707.pem
	I0917 00:13:29.387020  838391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/752707.pem
	I0917 00:13:29.393845  838391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/752707.pem /etc/ssl/certs/51391683.0"
	I0917 00:13:29.402996  838391 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 00:13:29.406679  838391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 00:13:29.413276  838391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 00:13:29.420039  838391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 00:13:29.426813  838391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 00:13:29.433710  838391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 00:13:29.440812  838391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0917 00:13:29.447756  838391 kubeadm.go:392] StartCluster: {Name:ha-472903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[
] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ing
ress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizatio
ns:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:13:29.447896  838391 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0917 00:13:29.447983  838391 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 00:13:29.484343  838391 cri.go:89] found id: "5a5a17cca6c0a643b6c0881dab5508dcb7de8e6ad77d7e6ecb81d434ab2cc8a1"
	I0917 00:13:29.484364  838391 cri.go:89] found id: "8683544e2a9d579448e28b8f33653e2c8d1315b2d07bd7b4ce574428d93c6f3a"
	I0917 00:13:29.484368  838391 cri.go:89] found id: "9f103b05d2d6fd9df1ffca0135173363251e58587aa3f9093200d96a7302d315"
	I0917 00:13:29.484373  838391 cri.go:89] found id: "3b457407f10e357ce33da7fa3fb4333f8312f0d3e3570cf8528cdcac8f5a1d0f"
	I0917 00:13:29.484376  838391 cri.go:89] found id: "cc69d2451cb65860b5bc78e027be2fc1cb0f9fa6542b4abe3bc1ff1c90a8fe60"
	I0917 00:13:29.484379  838391 cri.go:89] found id: "92dd4d116eb0387dded82fb32d35690ec2d00e3f5e7ac81bf7aea0c6814edd5e"
	I0917 00:13:29.484382  838391 cri.go:89] found id: "bba28cace6502de93aa43db4fb51671581c5074990dea721d98d36d839734a67"
	I0917 00:13:29.484384  838391 cri.go:89] found id: "087290a41f59caa4f9bc89759bcec6cf90f47c8a2ab83b7c671a8fff35641df9"
	I0917 00:13:29.484387  838391 cri.go:89] found id: "0aba62132d764965d8e1a80a4a6345bb7e34892b23143da4a7af3450cd465d6c"
	I0917 00:13:29.484395  838391 cri.go:89] found id: "23c0af0bdbe9526d53769461ed9f80d8c743b02e625b65cce39c888f5e7d4b4e"
	I0917 00:13:29.484398  838391 cri.go:89] found id: ""
	I0917 00:13:29.484470  838391 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W0917 00:13:29.498073  838391 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T00:13:29Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I0917 00:13:29.498177  838391 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 00:13:29.508791  838391 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0917 00:13:29.508813  838391 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0917 00:13:29.508861  838391 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0917 00:13:29.519962  838391 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:13:29.520528  838391 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-472903" does not appear in /home/jenkins/minikube-integration/21550-749120/kubeconfig
	I0917 00:13:29.520700  838391 kubeconfig.go:62] /home/jenkins/minikube-integration/21550-749120/kubeconfig needs updating (will repair): [kubeconfig missing "ha-472903" cluster setting kubeconfig missing "ha-472903" context setting]
	I0917 00:13:29.521229  838391 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/kubeconfig: {Name:mk937123a8fee18625833b0bd778c4556f6787be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:13:29.521963  838391 kapi.go:59] client config for ha-472903: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key", CAFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
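The rest.Config dump above is the client minikube assembles directly from the profile's client.crt/client.key and the cluster CA. Outside minikube, an equivalent config is normally obtained by loading the kubeconfig with client-go; a minimal sketch, using the kubeconfig path from this run purely for illustration:

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	kubeconfig := "/home/jenkins/minikube-integration/21550-749120/kubeconfig"

    	// Build a *rest.Config from the kubeconfig file, as kubectl would.
    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    	if err != nil {
    		log.Fatal(err)
    	}
    	clientset, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}

    	// Simple liveness check against the ha-472903 cluster: count its nodes.
    	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("nodes:", len(nodes.Items))
    }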
	I0917 00:13:29.522552  838391 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0917 00:13:29.522579  838391 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0917 00:13:29.522586  838391 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0917 00:13:29.522592  838391 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0917 00:13:29.522598  838391 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0917 00:13:29.522631  838391 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0917 00:13:29.523130  838391 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0917 00:13:29.536212  838391 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.49.2
	I0917 00:13:29.536248  838391 kubeadm.go:593] duration metric: took 27.419363ms to restartPrimaryControlPlane
	I0917 00:13:29.536260  838391 kubeadm.go:394] duration metric: took 88.513961ms to StartCluster
	I0917 00:13:29.536281  838391 settings.go:142] acquiring lock: {Name:mk6c1a5bee23e141aad5180323c16c47ed580ab8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:13:29.536352  838391 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21550-749120/kubeconfig
	I0917 00:13:29.537180  838391 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/kubeconfig: {Name:mk937123a8fee18625833b0bd778c4556f6787be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:13:29.537465  838391 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0917 00:13:29.537498  838391 start.go:241] waiting for startup goroutines ...
	I0917 00:13:29.537509  838391 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 00:13:29.537779  838391 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:13:29.539896  838391 out.go:179] * Enabled addons: 
	I0917 00:13:29.541345  838391 addons.go:514] duration metric: took 3.828487ms for enable addons: enabled=[]
	I0917 00:13:29.541404  838391 start.go:246] waiting for cluster config update ...
	I0917 00:13:29.541459  838391 start.go:255] writing updated cluster config ...
	I0917 00:13:29.543184  838391 out.go:203] 
	I0917 00:13:29.548360  838391 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:13:29.548520  838391 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0917 00:13:29.550284  838391 out.go:179] * Starting "ha-472903-m02" control-plane node in "ha-472903" cluster
	I0917 00:13:29.551514  838391 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0917 00:13:29.552445  838391 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:13:29.554184  838391 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0917 00:13:29.554221  838391 cache.go:58] Caching tarball of preloaded images
	I0917 00:13:29.554326  838391 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:13:29.554361  838391 preload.go:172] Found /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 00:13:29.554376  838391 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0917 00:13:29.554541  838391 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0917 00:13:29.581238  838391 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:13:29.581265  838391 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:13:29.581286  838391 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:13:29.581322  838391 start.go:360] acquireMachinesLock for ha-472903-m02: {Name:mk81d8c73856cf84ceff1767a1681f3f3cdab773 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:13:29.581402  838391 start.go:364] duration metric: took 53.081µs to acquireMachinesLock for "ha-472903-m02"
	I0917 00:13:29.581447  838391 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:13:29.581461  838391 fix.go:54] fixHost starting: m02
	I0917 00:13:29.581795  838391 cli_runner.go:164] Run: docker container inspect ha-472903-m02 --format={{.State.Status}}
	I0917 00:13:29.604878  838391 fix.go:112] recreateIfNeeded on ha-472903-m02: state=Stopped err=<nil>
	W0917 00:13:29.604915  838391 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:13:29.607517  838391 out.go:252] * Restarting existing docker container for "ha-472903-m02" ...
	I0917 00:13:29.607600  838391 cli_runner.go:164] Run: docker start ha-472903-m02
	I0917 00:13:29.911119  838391 cli_runner.go:164] Run: docker container inspect ha-472903-m02 --format={{.State.Status}}
	I0917 00:13:29.930731  838391 kic.go:430] container "ha-472903-m02" state is running.
	I0917 00:13:29.931116  838391 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m02
	I0917 00:13:29.951026  838391 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0917 00:13:29.951305  838391 machine.go:93] provisionDockerMachine start ...
	I0917 00:13:29.951370  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0917 00:13:29.974010  838391 main.go:141] libmachine: Using SSH client type: native
	I0917 00:13:29.974330  838391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33579 <nil> <nil>}
	I0917 00:13:29.974348  838391 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:13:29.975092  838391 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37012->127.0.0.1:33579: read: connection reset by peer
	I0917 00:13:33.111351  838391 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-472903-m02
	
	I0917 00:13:33.111379  838391 ubuntu.go:182] provisioning hostname "ha-472903-m02"
	I0917 00:13:33.111466  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0917 00:13:33.129914  838391 main.go:141] libmachine: Using SSH client type: native
	I0917 00:13:33.130125  838391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33579 <nil> <nil>}
	I0917 00:13:33.130138  838391 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-472903-m02 && echo "ha-472903-m02" | sudo tee /etc/hostname
	I0917 00:13:33.276390  838391 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-472903-m02
	
	I0917 00:13:33.276473  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0917 00:13:33.295322  838391 main.go:141] libmachine: Using SSH client type: native
	I0917 00:13:33.295578  838391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33579 <nil> <nil>}
	I0917 00:13:33.295626  838391 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-472903-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-472903-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-472903-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 00:13:33.430221  838391 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:13:33.430255  838391 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-749120/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-749120/.minikube}
	I0917 00:13:33.430276  838391 ubuntu.go:190] setting up certificates
	I0917 00:13:33.430293  838391 provision.go:84] configureAuth start
	I0917 00:13:33.430347  838391 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m02
	I0917 00:13:33.447859  838391 provision.go:143] copyHostCerts
	I0917 00:13:33.447896  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem
	I0917 00:13:33.447924  838391 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem, removing ...
	I0917 00:13:33.447931  838391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem
	I0917 00:13:33.447997  838391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem (1078 bytes)
	I0917 00:13:33.448082  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem
	I0917 00:13:33.448101  838391 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem, removing ...
	I0917 00:13:33.448105  838391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem
	I0917 00:13:33.448129  838391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem (1123 bytes)
	I0917 00:13:33.448171  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem
	I0917 00:13:33.448188  838391 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem, removing ...
	I0917 00:13:33.448194  838391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem
	I0917 00:13:33.448221  838391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem (1675 bytes)
	I0917 00:13:33.448284  838391 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem org=jenkins.ha-472903-m02 san=[127.0.0.1 192.168.49.3 ha-472903-m02 localhost minikube]
	I0917 00:13:33.772202  838391 provision.go:177] copyRemoteCerts
	I0917 00:13:33.772271  838391 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:13:33.772308  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0917 00:13:33.790580  838391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33579 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa Username:docker}
	I0917 00:13:33.888743  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 00:13:33.888811  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 00:13:33.915641  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 00:13:33.915714  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 00:13:33.947505  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 00:13:33.947576  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 00:13:33.982626  838391 provision.go:87] duration metric: took 552.315533ms to configureAuth
	I0917 00:13:33.982666  838391 ubuntu.go:206] setting minikube options for container-runtime
	I0917 00:13:33.983009  838391 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:13:33.983035  838391 machine.go:96] duration metric: took 4.031716501s to provisionDockerMachine
	I0917 00:13:33.983048  838391 start.go:293] postStartSetup for "ha-472903-m02" (driver="docker")
	I0917 00:13:33.983079  838391 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 00:13:33.983149  838391 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 00:13:33.983189  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0917 00:13:34.006390  838391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33579 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa Username:docker}
	I0917 00:13:34.114836  838391 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 00:13:34.122569  838391 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 00:13:34.122609  838391 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 00:13:34.122622  838391 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 00:13:34.122631  838391 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 00:13:34.122648  838391 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-749120/.minikube/addons for local assets ...
	I0917 00:13:34.122715  838391 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-749120/.minikube/files for local assets ...
	I0917 00:13:34.122819  838391 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> 7527072.pem in /etc/ssl/certs
	I0917 00:13:34.122842  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> /etc/ssl/certs/7527072.pem
	I0917 00:13:34.122963  838391 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 00:13:34.133119  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem --> /etc/ssl/certs/7527072.pem (1708 bytes)
	I0917 00:13:34.163792  838391 start.go:296] duration metric: took 180.726136ms for postStartSetup
	I0917 00:13:34.163881  838391 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:13:34.163931  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0917 00:13:34.187017  838391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33579 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa Username:docker}
	I0917 00:13:34.289000  838391 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:13:34.295122  838391 fix.go:56] duration metric: took 4.713651457s for fixHost
	I0917 00:13:34.295149  838391 start.go:83] releasing machines lock for "ha-472903-m02", held for 4.713713361s
	I0917 00:13:34.295238  838391 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m02
	I0917 00:13:34.323055  838391 out.go:179] * Found network options:
	I0917 00:13:34.324886  838391 out.go:179]   - NO_PROXY=192.168.49.2
	W0917 00:13:34.326740  838391 proxy.go:120] fail to check proxy env: Error ip not in block
	W0917 00:13:34.326797  838391 proxy.go:120] fail to check proxy env: Error ip not in block
	I0917 00:13:34.326881  838391 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 00:13:34.326949  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0917 00:13:34.327068  838391 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 00:13:34.327142  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0917 00:13:34.349495  838391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33579 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa Username:docker}
	I0917 00:13:34.351023  838391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33579 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa Username:docker}
	I0917 00:13:34.450454  838391 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0917 00:13:34.547618  838391 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0917 00:13:34.547706  838391 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:13:34.558822  838391 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0917 00:13:34.558854  838391 start.go:495] detecting cgroup driver to use...
	I0917 00:13:34.558889  838391 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:13:34.558939  838391 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 00:13:34.584135  838391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 00:13:34.599048  838391 docker.go:218] disabling cri-docker service (if available) ...
	I0917 00:13:34.599118  838391 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 00:13:34.615043  838391 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 00:13:34.627813  838391 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 00:13:34.751575  838391 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 00:13:34.913336  838391 docker.go:234] disabling docker service ...
	I0917 00:13:34.913429  838391 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 00:13:34.943843  838391 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 00:13:34.964995  838391 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 00:13:35.154858  838391 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 00:13:35.276803  838391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 00:13:35.292337  838391 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:13:35.312501  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0917 00:13:35.325061  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 00:13:35.337094  838391 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0917 00:13:35.337162  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0917 00:13:35.349635  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:13:35.361644  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 00:13:35.373144  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:13:35.385968  838391 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 00:13:35.397684  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 00:13:35.409662  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 00:13:35.422089  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 00:13:35.433950  838391 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 00:13:35.445355  838391 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 00:13:35.456096  838391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:13:35.554404  838391 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0917 00:13:35.775103  838391 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0917 00:13:35.775175  838391 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0917 00:13:35.780034  838391 start.go:563] Will wait 60s for crictl version
	I0917 00:13:35.780106  838391 ssh_runner.go:195] Run: which crictl
	I0917 00:13:35.784109  838391 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:13:35.826151  838391 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0917 00:13:35.826224  838391 ssh_runner.go:195] Run: containerd --version
	I0917 00:13:35.852960  838391 ssh_runner.go:195] Run: containerd --version
	I0917 00:13:35.877876  838391 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0917 00:13:35.879103  838391 out.go:179]   - env NO_PROXY=192.168.49.2
	I0917 00:13:35.880100  838391 cli_runner.go:164] Run: docker network inspect ha-472903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:13:35.897195  838391 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0917 00:13:35.901082  838391 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:13:35.912748  838391 mustload.go:65] Loading cluster: ha-472903
	I0917 00:13:35.912967  838391 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:13:35.913168  838391 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0917 00:13:35.931969  838391 host.go:66] Checking if "ha-472903" exists ...
	I0917 00:13:35.932217  838391 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903 for IP: 192.168.49.3
	I0917 00:13:35.932230  838391 certs.go:194] generating shared ca certs ...
	I0917 00:13:35.932244  838391 certs.go:226] acquiring lock for ca certs: {Name:mk87d179b4a631193bd9c86db8034ccf19400cde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:13:35.932358  838391 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key
	I0917 00:13:35.932394  838391 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key
	I0917 00:13:35.932404  838391 certs.go:256] generating profile certs ...
	I0917 00:13:35.932495  838391 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key
	I0917 00:13:35.932546  838391 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.b92722b6
	I0917 00:13:35.932585  838391 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key
	I0917 00:13:35.932596  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 00:13:35.932607  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 00:13:35.932619  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 00:13:35.932630  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 00:13:35.932643  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 00:13:35.932656  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 00:13:35.932668  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 00:13:35.932681  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 00:13:35.932726  838391 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem (1338 bytes)
	W0917 00:13:35.932752  838391 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707_empty.pem, impossibly tiny 0 bytes
	I0917 00:13:35.932761  838391 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 00:13:35.932781  838391 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem (1078 bytes)
	I0917 00:13:35.932801  838391 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem (1123 bytes)
	I0917 00:13:35.932822  838391 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem (1675 bytes)
	I0917 00:13:35.932861  838391 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem (1708 bytes)
	I0917 00:13:35.932888  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem -> /usr/share/ca-certificates/752707.pem
	I0917 00:13:35.932902  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> /usr/share/ca-certificates/7527072.pem
	I0917 00:13:35.932914  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:13:35.932957  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0917 00:13:35.950361  838391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33574 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0917 00:13:36.038689  838391 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0917 00:13:36.046320  838391 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0917 00:13:36.065517  838391 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0917 00:13:36.070746  838391 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0917 00:13:36.088267  838391 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0917 00:13:36.093060  838391 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0917 00:13:36.109798  838391 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0917 00:13:36.114630  838391 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0917 00:13:36.132250  838391 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0917 00:13:36.137979  838391 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0917 00:13:36.158118  838391 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0917 00:13:36.163359  838391 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0917 00:13:36.183892  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 00:13:36.221052  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 00:13:36.260302  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 00:13:36.294497  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0917 00:13:36.328388  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0917 00:13:36.364809  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 00:13:36.406406  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 00:13:36.458823  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 00:13:36.524795  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem --> /usr/share/ca-certificates/752707.pem (1338 bytes)
	I0917 00:13:36.572655  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem --> /usr/share/ca-certificates/7527072.pem (1708 bytes)
	I0917 00:13:36.619864  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 00:13:36.672387  838391 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0917 00:13:36.709674  838391 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0917 00:13:36.746751  838391 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0917 00:13:36.783161  838391 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0917 00:13:36.813099  838391 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0917 00:13:36.837070  838391 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0917 00:13:36.858764  838391 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0917 00:13:36.877818  838391 ssh_runner.go:195] Run: openssl version
	I0917 00:13:36.883443  838391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7527072.pem && ln -fs /usr/share/ca-certificates/7527072.pem /etc/ssl/certs/7527072.pem"
	I0917 00:13:36.894826  838391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7527072.pem
	I0917 00:13:36.899068  838391 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 23:54 /usr/share/ca-certificates/7527072.pem
	I0917 00:13:36.899146  838391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7527072.pem
	I0917 00:13:36.907246  838391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7527072.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 00:13:36.916910  838391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 00:13:36.927032  838391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:13:36.930914  838391 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:13:36.930968  838391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:13:36.940300  838391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 00:13:36.953573  838391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/752707.pem && ln -fs /usr/share/ca-certificates/752707.pem /etc/ssl/certs/752707.pem"
	I0917 00:13:36.967306  838391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/752707.pem
	I0917 00:13:36.971796  838391 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 23:54 /usr/share/ca-certificates/752707.pem
	I0917 00:13:36.971852  838391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/752707.pem
	I0917 00:13:36.981091  838391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/752707.pem /etc/ssl/certs/51391683.0"
	I0917 00:13:36.991490  838391 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 00:13:36.995167  838391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 00:13:37.003067  838391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 00:13:37.009863  838391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 00:13:37.016575  838391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 00:13:37.023485  838391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 00:13:37.032694  838391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0917 00:13:37.042763  838391 kubeadm.go:926] updating node {m02 192.168.49.3 8443 v1.34.0 containerd true true} ...
	I0917 00:13:37.042877  838391 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-472903-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 00:13:37.042911  838391 kube-vip.go:115] generating kube-vip config ...
	I0917 00:13:37.042948  838391 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0917 00:13:37.060530  838391 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:13:37.060601  838391 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0917 00:13:37.060658  838391 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 00:13:37.072293  838391 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 00:13:37.072371  838391 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0917 00:13:37.084220  838391 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0917 00:13:37.109777  838391 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 00:13:37.137135  838391 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0917 00:13:37.165385  838391 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0917 00:13:37.170106  838391 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:13:37.186447  838391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:13:37.337215  838391 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:13:37.351480  838391 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0917 00:13:37.351795  838391 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:13:37.353499  838391 out.go:179] * Verifying Kubernetes components...
	I0917 00:13:37.354663  838391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:13:37.476140  838391 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:13:37.492755  838391 kapi.go:59] client config for ha-472903: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key", CAFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0917 00:13:37.492840  838391 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0917 00:13:37.493129  838391 node_ready.go:35] waiting up to 6m0s for node "ha-472903-m02" to be "Ready" ...
	I0917 00:13:37.501768  838391 node_ready.go:49] node "ha-472903-m02" is "Ready"
	I0917 00:13:37.501795  838391 node_ready.go:38] duration metric: took 8.646756ms for node "ha-472903-m02" to be "Ready" ...
	I0917 00:13:37.501810  838391 api_server.go:52] waiting for apiserver process to appear ...
	I0917 00:13:37.501850  838391 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:13:37.513878  838391 api_server.go:72] duration metric: took 162.352734ms to wait for apiserver process to appear ...
	I0917 00:13:37.513902  838391 api_server.go:88] waiting for apiserver healthz status ...
	I0917 00:13:37.513995  838391 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0917 00:13:37.519494  838391 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0917 00:13:37.520502  838391 api_server.go:141] control plane version: v1.34.0
	I0917 00:13:37.520525  838391 api_server.go:131] duration metric: took 6.615829ms to wait for apiserver health ...
	I0917 00:13:37.520533  838391 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 00:13:37.529003  838391 system_pods.go:59] 24 kube-system pods found
	I0917 00:13:37.529040  838391 system_pods.go:61] "coredns-66bc5c9577-c94hz" [774f1c0f-9759-44c2-957d-5a97670f951b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:13:37.529049  838391 system_pods.go:61] "coredns-66bc5c9577-qn8m7" [1c58205e-e865-42fc-8282-23e3d779ee97] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:13:37.529058  838391 system_pods.go:61] "etcd-ha-472903" [e333577b-838c-41c5-ba86-ce3d7de57077] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 00:13:37.529064  838391 system_pods.go:61] "etcd-ha-472903-m02" [8a478117-c53d-4621-aa09-be3c16d386c0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 00:13:37.529068  838391 system_pods.go:61] "etcd-ha-472903-m03" [73e10c6a-306a-4c7e-b816-1b8d6b815292] Running
	I0917 00:13:37.529072  838391 system_pods.go:61] "kindnet-lh7dv" [1da43ca7-9af7-4573-9cdc-fd21b098ca2c] Running
	I0917 00:13:37.529075  838391 system_pods.go:61] "kindnet-q7c7s" [85db5b30-8ace-4bb0-8886-32b9ca032b2b] Running
	I0917 00:13:37.529078  838391 system_pods.go:61] "kindnet-x6twd" [f2346479-1adb-4bc7-af07-971525be2b05] Running
	I0917 00:13:37.529083  838391 system_pods.go:61] "kube-apiserver-ha-472903" [e2844751-3962-4753-8b63-79c124dd5fd7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:13:37.529092  838391 system_pods.go:61] "kube-apiserver-ha-472903-m02" [6675419c-7693-4970-b73c-8415bcda1684] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:13:37.529096  838391 system_pods.go:61] "kube-apiserver-ha-472903-m03" [8c79e747-e193-4471-a4be-ab4d604998ad] Running
	I0917 00:13:37.529102  838391 system_pods.go:61] "kube-controller-manager-ha-472903" [be5cfd0b-a3b9-44cf-8cde-74e9eb89c738] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 00:13:37.529110  838391 system_pods.go:61] "kube-controller-manager-ha-472903-m02" [54f6e7e0-0a78-4651-b24f-f902c6bf7efb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 00:13:37.529113  838391 system_pods.go:61] "kube-controller-manager-ha-472903-m03" [d48b1c84-6653-43e8-9322-ae2c64471dde] Running
	I0917 00:13:37.529118  838391 system_pods.go:61] "kube-proxy-58lkb" [32fed88c-ce9e-4536-8e96-04ab5b4f5d42] Running
	I0917 00:13:37.529121  838391 system_pods.go:61] "kube-proxy-d4m8f" [d4a70eec-48a7-4ea6-871a-1b5ed2beca9a] Running
	I0917 00:13:37.529125  838391 system_pods.go:61] "kube-proxy-kn6nb" [53644856-9fda-4556-bbb5-12254c4b00a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 00:13:37.529131  838391 system_pods.go:61] "kube-scheduler-ha-472903" [e949de65-b218-45cb-abe7-79b704aae473] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 00:13:37.529136  838391 system_pods.go:61] "kube-scheduler-ha-472903-m02" [08b5a4f0-3aa6-4a82-b171-afc1eafcd4c2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 00:13:37.529144  838391 system_pods.go:61] "kube-scheduler-ha-472903-m03" [7b954c46-3b8c-47e7-b10c-a20dd936d45c] Running
	I0917 00:13:37.529147  838391 system_pods.go:61] "kube-vip-ha-472903" [d3849ebb-d365-491f-955c-2a7ca580290b] Running
	I0917 00:13:37.529150  838391 system_pods.go:61] "kube-vip-ha-472903-m02" [748f096f-bec6-4de8-92f0-128db827bdd6] Running
	I0917 00:13:37.529153  838391 system_pods.go:61] "kube-vip-ha-472903-m03" [62b1f237-95c3-4a40-b3b7-a519f7c80ad4] Running
	I0917 00:13:37.529156  838391 system_pods.go:61] "storage-provisioner" [ac7f283e-4d28-46cf-a519-bd227237d5e7] Running
	I0917 00:13:37.529161  838391 system_pods.go:74] duration metric: took 8.623694ms to wait for pod list to return data ...
	I0917 00:13:37.529167  838391 default_sa.go:34] waiting for default service account to be created ...
	I0917 00:13:37.531877  838391 default_sa.go:45] found service account: "default"
	I0917 00:13:37.531901  838391 default_sa.go:55] duration metric: took 2.728819ms for default service account to be created ...
	I0917 00:13:37.531910  838391 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 00:13:37.538254  838391 system_pods.go:86] 24 kube-system pods found
	I0917 00:13:37.538287  838391 system_pods.go:89] "coredns-66bc5c9577-c94hz" [774f1c0f-9759-44c2-957d-5a97670f951b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:13:37.538298  838391 system_pods.go:89] "coredns-66bc5c9577-qn8m7" [1c58205e-e865-42fc-8282-23e3d779ee97] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:13:37.538308  838391 system_pods.go:89] "etcd-ha-472903" [e333577b-838c-41c5-ba86-ce3d7de57077] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 00:13:37.538315  838391 system_pods.go:89] "etcd-ha-472903-m02" [8a478117-c53d-4621-aa09-be3c16d386c0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 00:13:37.538321  838391 system_pods.go:89] "etcd-ha-472903-m03" [73e10c6a-306a-4c7e-b816-1b8d6b815292] Running
	I0917 00:13:37.538327  838391 system_pods.go:89] "kindnet-lh7dv" [1da43ca7-9af7-4573-9cdc-fd21b098ca2c] Running
	I0917 00:13:37.538333  838391 system_pods.go:89] "kindnet-q7c7s" [85db5b30-8ace-4bb0-8886-32b9ca032b2b] Running
	I0917 00:13:37.538340  838391 system_pods.go:89] "kindnet-x6twd" [f2346479-1adb-4bc7-af07-971525be2b05] Running
	I0917 00:13:37.538353  838391 system_pods.go:89] "kube-apiserver-ha-472903" [e2844751-3962-4753-8b63-79c124dd5fd7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:13:37.538366  838391 system_pods.go:89] "kube-apiserver-ha-472903-m02" [6675419c-7693-4970-b73c-8415bcda1684] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:13:37.538373  838391 system_pods.go:89] "kube-apiserver-ha-472903-m03" [8c79e747-e193-4471-a4be-ab4d604998ad] Running
	I0917 00:13:37.538383  838391 system_pods.go:89] "kube-controller-manager-ha-472903" [be5cfd0b-a3b9-44cf-8cde-74e9eb89c738] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 00:13:37.538396  838391 system_pods.go:89] "kube-controller-manager-ha-472903-m02" [54f6e7e0-0a78-4651-b24f-f902c6bf7efb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 00:13:37.538406  838391 system_pods.go:89] "kube-controller-manager-ha-472903-m03" [d48b1c84-6653-43e8-9322-ae2c64471dde] Running
	I0917 00:13:37.538447  838391 system_pods.go:89] "kube-proxy-58lkb" [32fed88c-ce9e-4536-8e96-04ab5b4f5d42] Running
	I0917 00:13:37.538457  838391 system_pods.go:89] "kube-proxy-d4m8f" [d4a70eec-48a7-4ea6-871a-1b5ed2beca9a] Running
	I0917 00:13:37.538465  838391 system_pods.go:89] "kube-proxy-kn6nb" [53644856-9fda-4556-bbb5-12254c4b00a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 00:13:37.538479  838391 system_pods.go:89] "kube-scheduler-ha-472903" [e949de65-b218-45cb-abe7-79b704aae473] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 00:13:37.538492  838391 system_pods.go:89] "kube-scheduler-ha-472903-m02" [08b5a4f0-3aa6-4a82-b171-afc1eafcd4c2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 00:13:37.538504  838391 system_pods.go:89] "kube-scheduler-ha-472903-m03" [7b954c46-3b8c-47e7-b10c-a20dd936d45c] Running
	I0917 00:13:37.538511  838391 system_pods.go:89] "kube-vip-ha-472903" [d3849ebb-d365-491f-955c-2a7ca580290b] Running
	I0917 00:13:37.538517  838391 system_pods.go:89] "kube-vip-ha-472903-m02" [748f096f-bec6-4de8-92f0-128db827bdd6] Running
	I0917 00:13:37.538523  838391 system_pods.go:89] "kube-vip-ha-472903-m03" [62b1f237-95c3-4a40-b3b7-a519f7c80ad4] Running
	I0917 00:13:37.538528  838391 system_pods.go:89] "storage-provisioner" [ac7f283e-4d28-46cf-a519-bd227237d5e7] Running
	I0917 00:13:37.538538  838391 system_pods.go:126] duration metric: took 6.620318ms to wait for k8s-apps to be running ...
	I0917 00:13:37.538550  838391 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 00:13:37.538595  838391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:13:37.551380  838391 system_svc.go:56] duration metric: took 12.817524ms WaitForService to wait for kubelet
	I0917 00:13:37.551421  838391 kubeadm.go:578] duration metric: took 199.889741ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:13:37.551446  838391 node_conditions.go:102] verifying NodePressure condition ...
	I0917 00:13:37.554601  838391 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:13:37.554630  838391 node_conditions.go:123] node cpu capacity is 8
	I0917 00:13:37.554646  838391 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:13:37.554651  838391 node_conditions.go:123] node cpu capacity is 8
	I0917 00:13:37.554657  838391 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:13:37.554661  838391 node_conditions.go:123] node cpu capacity is 8
	I0917 00:13:37.554667  838391 node_conditions.go:105] duration metric: took 3.21568ms to run NodePressure ...
	I0917 00:13:37.554682  838391 start.go:241] waiting for startup goroutines ...
	I0917 00:13:37.554713  838391 start.go:255] writing updated cluster config ...
	I0917 00:13:37.556785  838391 out.go:203] 
	I0917 00:13:37.558118  838391 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:13:37.558205  838391 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0917 00:13:37.560287  838391 out.go:179] * Starting "ha-472903-m03" control-plane node in "ha-472903" cluster
	I0917 00:13:37.561674  838391 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0917 00:13:37.562756  838391 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:13:37.563720  838391 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0917 00:13:37.563746  838391 cache.go:58] Caching tarball of preloaded images
	I0917 00:13:37.563814  838391 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:13:37.563852  838391 preload.go:172] Found /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 00:13:37.563866  838391 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0917 00:13:37.563958  838391 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0917 00:13:37.584605  838391 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:13:37.584624  838391 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:13:37.584638  838391 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:13:37.584670  838391 start.go:360] acquireMachinesLock for ha-472903-m03: {Name:mk61000bb8e4699ca3310a7fc257e30a156b69de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:13:37.584735  838391 start.go:364] duration metric: took 44.453µs to acquireMachinesLock for "ha-472903-m03"
	I0917 00:13:37.584761  838391 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:13:37.584768  838391 fix.go:54] fixHost starting: m03
	I0917 00:13:37.585018  838391 cli_runner.go:164] Run: docker container inspect ha-472903-m03 --format={{.State.Status}}
	I0917 00:13:37.604118  838391 fix.go:112] recreateIfNeeded on ha-472903-m03: state=Stopped err=<nil>
	W0917 00:13:37.604141  838391 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:13:37.606555  838391 out.go:252] * Restarting existing docker container for "ha-472903-m03" ...
	I0917 00:13:37.606618  838391 cli_runner.go:164] Run: docker start ha-472903-m03
	I0917 00:13:37.854742  838391 cli_runner.go:164] Run: docker container inspect ha-472903-m03 --format={{.State.Status}}
	I0917 00:13:37.873167  838391 kic.go:430] container "ha-472903-m03" state is running.
	I0917 00:13:37.873554  838391 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m03
	I0917 00:13:37.894030  838391 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0917 00:13:37.894294  838391 machine.go:93] provisionDockerMachine start ...
	I0917 00:13:37.894371  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0917 00:13:37.912571  838391 main.go:141] libmachine: Using SSH client type: native
	I0917 00:13:37.912785  838391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33584 <nil> <nil>}
	I0917 00:13:37.912796  838391 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:13:37.913480  838391 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50250->127.0.0.1:33584: read: connection reset by peer
	I0917 00:13:41.078339  838391 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-472903-m03
	
	I0917 00:13:41.078371  838391 ubuntu.go:182] provisioning hostname "ha-472903-m03"
	I0917 00:13:41.078468  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0917 00:13:41.099623  838391 main.go:141] libmachine: Using SSH client type: native
	I0917 00:13:41.099906  838391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33584 <nil> <nil>}
	I0917 00:13:41.099929  838391 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-472903-m03 && echo "ha-472903-m03" | sudo tee /etc/hostname
	I0917 00:13:41.256611  838391 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-472903-m03
	
	I0917 00:13:41.256681  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0917 00:13:41.275951  838391 main.go:141] libmachine: Using SSH client type: native
	I0917 00:13:41.276266  838391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33584 <nil> <nil>}
	I0917 00:13:41.276291  838391 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-472903-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-472903-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-472903-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 00:13:41.413177  838391 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:13:41.413213  838391 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-749120/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-749120/.minikube}
	I0917 00:13:41.413235  838391 ubuntu.go:190] setting up certificates
	I0917 00:13:41.413252  838391 provision.go:84] configureAuth start
	I0917 00:13:41.413326  838391 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m03
	I0917 00:13:41.432242  838391 provision.go:143] copyHostCerts
	I0917 00:13:41.432284  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem
	I0917 00:13:41.432323  838391 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem, removing ...
	I0917 00:13:41.432334  838391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem
	I0917 00:13:41.432427  838391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem (1078 bytes)
	I0917 00:13:41.432522  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem
	I0917 00:13:41.432547  838391 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem, removing ...
	I0917 00:13:41.432556  838391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem
	I0917 00:13:41.432591  838391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem (1123 bytes)
	I0917 00:13:41.432652  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem
	I0917 00:13:41.432676  838391 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem, removing ...
	I0917 00:13:41.432684  838391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem
	I0917 00:13:41.432717  838391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem (1675 bytes)
	I0917 00:13:41.432785  838391 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem org=jenkins.ha-472903-m03 san=[127.0.0.1 192.168.49.4 ha-472903-m03 localhost minikube]
	I0917 00:13:41.862573  838391 provision.go:177] copyRemoteCerts
	I0917 00:13:41.862629  838391 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:13:41.862665  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0917 00:13:41.885400  838391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33584 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa Username:docker}
	I0917 00:13:41.994335  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 00:13:41.994423  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 00:13:42.028538  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 00:13:42.028607  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 00:13:42.067649  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 00:13:42.067726  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 00:13:42.099602  838391 provision.go:87] duration metric: took 686.33067ms to configureAuth
	I0917 00:13:42.099636  838391 ubuntu.go:206] setting minikube options for container-runtime
	I0917 00:13:42.099920  838391 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:13:42.099938  838391 machine.go:96] duration metric: took 4.205627363s to provisionDockerMachine
	I0917 00:13:42.099950  838391 start.go:293] postStartSetup for "ha-472903-m03" (driver="docker")
	I0917 00:13:42.099962  838391 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 00:13:42.100117  838391 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 00:13:42.100183  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0917 00:13:42.122141  838391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33584 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa Username:docker}
	I0917 00:13:42.233836  838391 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 00:13:42.238854  838391 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 00:13:42.238889  838391 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 00:13:42.238900  838391 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 00:13:42.238908  838391 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 00:13:42.238924  838391 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-749120/.minikube/addons for local assets ...
	I0917 00:13:42.238985  838391 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-749120/.minikube/files for local assets ...
	I0917 00:13:42.239080  838391 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> 7527072.pem in /etc/ssl/certs
	I0917 00:13:42.239088  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> /etc/ssl/certs/7527072.pem
	I0917 00:13:42.239207  838391 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 00:13:42.256636  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem --> /etc/ssl/certs/7527072.pem (1708 bytes)
	I0917 00:13:42.284884  838391 start.go:296] duration metric: took 184.914637ms for postStartSetup
	I0917 00:13:42.284980  838391 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:13:42.285038  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0917 00:13:42.306309  838391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33584 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa Username:docker}
	I0917 00:13:42.403953  838391 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:13:42.409407  838391 fix.go:56] duration metric: took 4.824632112s for fixHost
	I0917 00:13:42.409462  838391 start.go:83] releasing machines lock for "ha-472903-m03", held for 4.824710137s
	I0917 00:13:42.409541  838391 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m03
	I0917 00:13:42.432198  838391 out.go:179] * Found network options:
	I0917 00:13:42.433393  838391 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0917 00:13:42.434713  838391 proxy.go:120] fail to check proxy env: Error ip not in block
	W0917 00:13:42.434749  838391 proxy.go:120] fail to check proxy env: Error ip not in block
	W0917 00:13:42.434778  838391 proxy.go:120] fail to check proxy env: Error ip not in block
	W0917 00:13:42.434796  838391 proxy.go:120] fail to check proxy env: Error ip not in block
	I0917 00:13:42.434873  838391 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 00:13:42.434927  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0917 00:13:42.434964  838391 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 00:13:42.435037  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0917 00:13:42.456445  838391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33584 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa Username:docker}
	I0917 00:13:42.457637  838391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33584 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa Username:docker}
	I0917 00:13:42.649452  838391 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0917 00:13:42.669255  838391 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0917 00:13:42.669336  838391 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:13:42.678466  838391 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0917 00:13:42.678490  838391 start.go:495] detecting cgroup driver to use...
	I0917 00:13:42.678537  838391 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:13:42.678593  838391 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 00:13:42.694034  838391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 00:13:42.706095  838391 docker.go:218] disabling cri-docker service (if available) ...
	I0917 00:13:42.706148  838391 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 00:13:42.720214  838391 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 00:13:42.731568  838391 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 00:13:42.844067  838391 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 00:13:42.990517  838391 docker.go:234] disabling docker service ...
	I0917 00:13:42.990597  838391 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 00:13:43.009784  838391 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 00:13:43.025954  838391 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 00:13:43.175561  838391 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 00:13:43.288802  838391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 00:13:43.302127  838391 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:13:43.320551  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0917 00:13:43.330880  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 00:13:43.341008  838391 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0917 00:13:43.341063  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0917 00:13:43.351160  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:13:43.361609  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 00:13:43.371882  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:13:43.382351  838391 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 00:13:43.391804  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 00:13:43.401909  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 00:13:43.413802  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 00:13:43.424357  838391 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 00:13:43.433387  838391 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 00:13:43.442035  838391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:13:43.556953  838391 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0917 00:13:43.771383  838391 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0917 00:13:43.771487  838391 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0917 00:13:43.776031  838391 start.go:563] Will wait 60s for crictl version
	I0917 00:13:43.776089  838391 ssh_runner.go:195] Run: which crictl
	I0917 00:13:43.779581  838391 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:13:43.819843  838391 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0917 00:13:43.819918  838391 ssh_runner.go:195] Run: containerd --version
	I0917 00:13:43.856395  838391 ssh_runner.go:195] Run: containerd --version
	I0917 00:13:43.887208  838391 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0917 00:13:43.888621  838391 out.go:179]   - env NO_PROXY=192.168.49.2
	I0917 00:13:43.889813  838391 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0917 00:13:43.890984  838391 cli_runner.go:164] Run: docker network inspect ha-472903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:13:43.910830  838391 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0917 00:13:43.915764  838391 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:13:43.928519  838391 mustload.go:65] Loading cluster: ha-472903
	I0917 00:13:43.928713  838391 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:13:43.928903  838391 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0917 00:13:43.947488  838391 host.go:66] Checking if "ha-472903" exists ...
	I0917 00:13:43.947756  838391 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903 for IP: 192.168.49.4
	I0917 00:13:43.947768  838391 certs.go:194] generating shared ca certs ...
	I0917 00:13:43.947788  838391 certs.go:226] acquiring lock for ca certs: {Name:mk87d179b4a631193bd9c86db8034ccf19400cde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:13:43.947924  838391 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key
	I0917 00:13:43.947984  838391 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key
	I0917 00:13:43.947997  838391 certs.go:256] generating profile certs ...
	I0917 00:13:43.948089  838391 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key
	I0917 00:13:43.948160  838391 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.14b885b8
	I0917 00:13:43.948220  838391 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key
	I0917 00:13:43.948236  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 00:13:43.948257  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 00:13:43.948274  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 00:13:43.948291  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 00:13:43.948305  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 00:13:43.948322  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 00:13:43.948341  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 00:13:43.948359  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 00:13:43.948448  838391 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem (1338 bytes)
	W0917 00:13:43.948497  838391 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707_empty.pem, impossibly tiny 0 bytes
	I0917 00:13:43.948514  838391 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 00:13:43.948542  838391 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem (1078 bytes)
	I0917 00:13:43.948574  838391 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem (1123 bytes)
	I0917 00:13:43.948605  838391 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem (1675 bytes)
	I0917 00:13:43.948679  838391 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem (1708 bytes)
	I0917 00:13:43.948730  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:13:43.948750  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem -> /usr/share/ca-certificates/752707.pem
	I0917 00:13:43.948766  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> /usr/share/ca-certificates/7527072.pem
	I0917 00:13:43.948828  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0917 00:13:43.966378  838391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33574 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0917 00:13:44.054709  838391 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0917 00:13:44.058781  838391 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0917 00:13:44.071805  838391 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0917 00:13:44.075707  838391 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0917 00:13:44.088751  838391 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0917 00:13:44.092347  838391 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0917 00:13:44.104909  838391 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0917 00:13:44.108527  838391 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0917 00:13:44.121249  838391 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0917 00:13:44.124730  838391 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0917 00:13:44.137128  838391 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0917 00:13:44.140545  838391 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0917 00:13:44.153313  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 00:13:44.178995  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 00:13:44.203321  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 00:13:44.228724  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0917 00:13:44.253672  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0917 00:13:44.277964  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 00:13:44.302441  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 00:13:44.326350  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 00:13:44.351539  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 00:13:44.376666  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem --> /usr/share/ca-certificates/752707.pem (1338 bytes)
	I0917 00:13:44.404677  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem --> /usr/share/ca-certificates/7527072.pem (1708 bytes)
	I0917 00:13:44.431366  838391 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0917 00:13:44.450278  838391 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0917 00:13:44.468513  838391 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0917 00:13:44.486743  838391 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0917 00:13:44.504987  838391 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0917 00:13:44.524143  838391 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0917 00:13:44.542282  838391 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0917 00:13:44.563055  838391 ssh_runner.go:195] Run: openssl version
	I0917 00:13:44.569331  838391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 00:13:44.580250  838391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:13:44.584080  838391 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:13:44.584138  838391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:13:44.591070  838391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 00:13:44.600282  838391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/752707.pem && ln -fs /usr/share/ca-certificates/752707.pem /etc/ssl/certs/752707.pem"
	I0917 00:13:44.610104  838391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/752707.pem
	I0917 00:13:44.613726  838391 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 23:54 /usr/share/ca-certificates/752707.pem
	I0917 00:13:44.613768  838391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/752707.pem
	I0917 00:13:44.620611  838391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/752707.pem /etc/ssl/certs/51391683.0"
	I0917 00:13:44.629788  838391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7527072.pem && ln -fs /usr/share/ca-certificates/7527072.pem /etc/ssl/certs/7527072.pem"
	I0917 00:13:44.639483  838391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7527072.pem
	I0917 00:13:44.643062  838391 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 23:54 /usr/share/ca-certificates/7527072.pem
	I0917 00:13:44.643110  838391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7527072.pem
	I0917 00:13:44.650489  838391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7527072.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 00:13:44.659935  838391 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 00:13:44.663514  838391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 00:13:44.669906  838391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 00:13:44.676511  838391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 00:13:44.682889  838391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 00:13:44.689353  838391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 00:13:44.695631  838391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0917 00:13:44.702340  838391 kubeadm.go:926] updating node {m03 192.168.49.4 8443 v1.34.0 containerd true true} ...
	I0917 00:13:44.702470  838391 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-472903-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 00:13:44.702498  838391 kube-vip.go:115] generating kube-vip config ...
	I0917 00:13:44.702533  838391 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0917 00:13:44.715980  838391 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:13:44.716039  838391 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0917 00:13:44.716091  838391 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 00:13:44.725480  838391 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 00:13:44.725529  838391 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0917 00:13:44.734323  838391 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0917 00:13:44.753458  838391 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 00:13:44.773199  838391 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0917 00:13:44.791551  838391 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0917 00:13:44.795163  838391 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:13:44.806641  838391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:13:44.919558  838391 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:13:44.932561  838391 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0917 00:13:44.932786  838391 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:13:44.934564  838391 out.go:179] * Verifying Kubernetes components...
	I0917 00:13:44.935745  838391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:13:45.049795  838391 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:13:45.064166  838391 kapi.go:59] client config for ha-472903: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key", CAFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0917 00:13:45.064235  838391 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0917 00:13:45.064458  838391 node_ready.go:35] waiting up to 6m0s for node "ha-472903-m03" to be "Ready" ...
	I0917 00:13:45.067494  838391 node_ready.go:49] node "ha-472903-m03" is "Ready"
	I0917 00:13:45.067523  838391 node_ready.go:38] duration metric: took 3.046711ms for node "ha-472903-m03" to be "Ready" ...
	I0917 00:13:45.067540  838391 api_server.go:52] waiting for apiserver process to appear ...
	I0917 00:13:45.067600  838391 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:13:45.078867  838391 api_server.go:72] duration metric: took 146.25055ms to wait for apiserver process to appear ...
	I0917 00:13:45.078891  838391 api_server.go:88] waiting for apiserver healthz status ...
	I0917 00:13:45.078908  838391 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0917 00:13:45.084241  838391 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0917 00:13:45.085084  838391 api_server.go:141] control plane version: v1.34.0
	I0917 00:13:45.085104  838391 api_server.go:131] duration metric: took 6.207355ms to wait for apiserver health ...
	I0917 00:13:45.085112  838391 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 00:13:45.090968  838391 system_pods.go:59] 24 kube-system pods found
	I0917 00:13:45.091001  838391 system_pods.go:61] "coredns-66bc5c9577-c94hz" [774f1c0f-9759-44c2-957d-5a97670f951b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:13:45.091023  838391 system_pods.go:61] "coredns-66bc5c9577-qn8m7" [1c58205e-e865-42fc-8282-23e3d779ee97] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:13:45.091035  838391 system_pods.go:61] "etcd-ha-472903" [e333577b-838c-41c5-ba86-ce3d7de57077] Running
	I0917 00:13:45.091045  838391 system_pods.go:61] "etcd-ha-472903-m02" [8a478117-c53d-4621-aa09-be3c16d386c0] Running
	I0917 00:13:45.091053  838391 system_pods.go:61] "etcd-ha-472903-m03" [73e10c6a-306a-4c7e-b816-1b8d6b815292] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 00:13:45.091060  838391 system_pods.go:61] "kindnet-lh7dv" [1da43ca7-9af7-4573-9cdc-fd21b098ca2c] Running
	I0917 00:13:45.091064  838391 system_pods.go:61] "kindnet-q7c7s" [85db5b30-8ace-4bb0-8886-32b9ca032b2b] Running
	I0917 00:13:45.091070  838391 system_pods.go:61] "kindnet-x6twd" [f2346479-1adb-4bc7-af07-971525be2b05] Running
	I0917 00:13:45.091076  838391 system_pods.go:61] "kube-apiserver-ha-472903" [e2844751-3962-4753-8b63-79c124dd5fd7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:13:45.091088  838391 system_pods.go:61] "kube-apiserver-ha-472903-m02" [6675419c-7693-4970-b73c-8415bcda1684] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:13:45.091100  838391 system_pods.go:61] "kube-apiserver-ha-472903-m03" [8c79e747-e193-4471-a4be-ab4d604998ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:13:45.091109  838391 system_pods.go:61] "kube-controller-manager-ha-472903" [be5cfd0b-a3b9-44cf-8cde-74e9eb89c738] Running
	I0917 00:13:45.091115  838391 system_pods.go:61] "kube-controller-manager-ha-472903-m02" [54f6e7e0-0a78-4651-b24f-f902c6bf7efb] Running
	I0917 00:13:45.091127  838391 system_pods.go:61] "kube-controller-manager-ha-472903-m03" [d48b1c84-6653-43e8-9322-ae2c64471dde] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 00:13:45.091135  838391 system_pods.go:61] "kube-proxy-58lkb" [32fed88c-ce9e-4536-8e96-04ab5b4f5d42] Running
	I0917 00:13:45.091141  838391 system_pods.go:61] "kube-proxy-d4m8f" [d4a70eec-48a7-4ea6-871a-1b5ed2beca9a] Running
	I0917 00:13:45.091152  838391 system_pods.go:61] "kube-proxy-kn6nb" [53644856-9fda-4556-bbb5-12254c4b00a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 00:13:45.091159  838391 system_pods.go:61] "kube-scheduler-ha-472903" [e949de65-b218-45cb-abe7-79b704aae473] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 00:13:45.091164  838391 system_pods.go:61] "kube-scheduler-ha-472903-m02" [08b5a4f0-3aa6-4a82-b171-afc1eafcd4c2] Running
	I0917 00:13:45.091177  838391 system_pods.go:61] "kube-scheduler-ha-472903-m03" [7b954c46-3b8c-47e7-b10c-a20dd936d45c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 00:13:45.091187  838391 system_pods.go:61] "kube-vip-ha-472903" [d3849ebb-d365-491f-955c-2a7ca580290b] Running
	I0917 00:13:45.091196  838391 system_pods.go:61] "kube-vip-ha-472903-m02" [748f096f-bec6-4de8-92f0-128db827bdd6] Running
	I0917 00:13:45.091200  838391 system_pods.go:61] "kube-vip-ha-472903-m03" [62b1f237-95c3-4a40-b3b7-a519f7c80ad4] Running
	I0917 00:13:45.091208  838391 system_pods.go:61] "storage-provisioner" [ac7f283e-4d28-46cf-a519-bd227237d5e7] Running
	I0917 00:13:45.091216  838391 system_pods.go:74] duration metric: took 6.096009ms to wait for pod list to return data ...
	I0917 00:13:45.091227  838391 default_sa.go:34] waiting for default service account to be created ...
	I0917 00:13:45.093796  838391 default_sa.go:45] found service account: "default"
	I0917 00:13:45.093813  838391 default_sa.go:55] duration metric: took 2.577656ms for default service account to be created ...
	I0917 00:13:45.093820  838391 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 00:13:45.099455  838391 system_pods.go:86] 24 kube-system pods found
	I0917 00:13:45.099490  838391 system_pods.go:89] "coredns-66bc5c9577-c94hz" [774f1c0f-9759-44c2-957d-5a97670f951b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:13:45.099501  838391 system_pods.go:89] "coredns-66bc5c9577-qn8m7" [1c58205e-e865-42fc-8282-23e3d779ee97] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:13:45.099507  838391 system_pods.go:89] "etcd-ha-472903" [e333577b-838c-41c5-ba86-ce3d7de57077] Running
	I0917 00:13:45.099511  838391 system_pods.go:89] "etcd-ha-472903-m02" [8a478117-c53d-4621-aa09-be3c16d386c0] Running
	I0917 00:13:45.099518  838391 system_pods.go:89] "etcd-ha-472903-m03" [73e10c6a-306a-4c7e-b816-1b8d6b815292] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 00:13:45.099540  838391 system_pods.go:89] "kindnet-lh7dv" [1da43ca7-9af7-4573-9cdc-fd21b098ca2c] Running
	I0917 00:13:45.099551  838391 system_pods.go:89] "kindnet-q7c7s" [85db5b30-8ace-4bb0-8886-32b9ca032b2b] Running
	I0917 00:13:45.099556  838391 system_pods.go:89] "kindnet-x6twd" [f2346479-1adb-4bc7-af07-971525be2b05] Running
	I0917 00:13:45.099563  838391 system_pods.go:89] "kube-apiserver-ha-472903" [e2844751-3962-4753-8b63-79c124dd5fd7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:13:45.099578  838391 system_pods.go:89] "kube-apiserver-ha-472903-m02" [6675419c-7693-4970-b73c-8415bcda1684] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:13:45.099589  838391 system_pods.go:89] "kube-apiserver-ha-472903-m03" [8c79e747-e193-4471-a4be-ab4d604998ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:13:45.099596  838391 system_pods.go:89] "kube-controller-manager-ha-472903" [be5cfd0b-a3b9-44cf-8cde-74e9eb89c738] Running
	I0917 00:13:45.099601  838391 system_pods.go:89] "kube-controller-manager-ha-472903-m02" [54f6e7e0-0a78-4651-b24f-f902c6bf7efb] Running
	I0917 00:13:45.099614  838391 system_pods.go:89] "kube-controller-manager-ha-472903-m03" [d48b1c84-6653-43e8-9322-ae2c64471dde] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 00:13:45.099624  838391 system_pods.go:89] "kube-proxy-58lkb" [32fed88c-ce9e-4536-8e96-04ab5b4f5d42] Running
	I0917 00:13:45.099632  838391 system_pods.go:89] "kube-proxy-d4m8f" [d4a70eec-48a7-4ea6-871a-1b5ed2beca9a] Running
	I0917 00:13:45.099639  838391 system_pods.go:89] "kube-proxy-kn6nb" [53644856-9fda-4556-bbb5-12254c4b00a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 00:13:45.099649  838391 system_pods.go:89] "kube-scheduler-ha-472903" [e949de65-b218-45cb-abe7-79b704aae473] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 00:13:45.099657  838391 system_pods.go:89] "kube-scheduler-ha-472903-m02" [08b5a4f0-3aa6-4a82-b171-afc1eafcd4c2] Running
	I0917 00:13:45.099665  838391 system_pods.go:89] "kube-scheduler-ha-472903-m03" [7b954c46-3b8c-47e7-b10c-a20dd936d45c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 00:13:45.099678  838391 system_pods.go:89] "kube-vip-ha-472903" [d3849ebb-d365-491f-955c-2a7ca580290b] Running
	I0917 00:13:45.099682  838391 system_pods.go:89] "kube-vip-ha-472903-m02" [748f096f-bec6-4de8-92f0-128db827bdd6] Running
	I0917 00:13:45.099688  838391 system_pods.go:89] "kube-vip-ha-472903-m03" [62b1f237-95c3-4a40-b3b7-a519f7c80ad4] Running
	I0917 00:13:45.099693  838391 system_pods.go:89] "storage-provisioner" [ac7f283e-4d28-46cf-a519-bd227237d5e7] Running
	I0917 00:13:45.099701  838391 system_pods.go:126] duration metric: took 5.874708ms to wait for k8s-apps to be running ...
	I0917 00:13:45.099714  838391 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 00:13:45.099765  838391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:13:45.111785  838391 system_svc.go:56] duration metric: took 12.061761ms WaitForService to wait for kubelet
	I0917 00:13:45.111811  838391 kubeadm.go:578] duration metric: took 179.201567ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:13:45.111829  838391 node_conditions.go:102] verifying NodePressure condition ...
	I0917 00:13:45.115075  838391 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:13:45.115095  838391 node_conditions.go:123] node cpu capacity is 8
	I0917 00:13:45.115109  838391 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:13:45.115114  838391 node_conditions.go:123] node cpu capacity is 8
	I0917 00:13:45.115118  838391 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:13:45.115124  838391 node_conditions.go:123] node cpu capacity is 8
	I0917 00:13:45.115130  838391 node_conditions.go:105] duration metric: took 3.295987ms to run NodePressure ...
	I0917 00:13:45.115145  838391 start.go:241] waiting for startup goroutines ...
	I0917 00:13:45.115177  838391 start.go:255] writing updated cluster config ...
	I0917 00:13:45.116870  838391 out.go:203] 
	I0917 00:13:45.117967  838391 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:13:45.118090  838391 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0917 00:13:45.119494  838391 out.go:179] * Starting "ha-472903-m04" worker node in "ha-472903" cluster
	I0917 00:13:45.120460  838391 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0917 00:13:45.121518  838391 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:13:45.122495  838391 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0917 00:13:45.122511  838391 cache.go:58] Caching tarball of preloaded images
	I0917 00:13:45.122563  838391 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:13:45.122595  838391 preload.go:172] Found /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 00:13:45.122603  838391 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0917 00:13:45.122694  838391 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0917 00:13:45.143478  838391 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:13:45.143500  838391 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:13:45.143517  838391 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:13:45.143550  838391 start.go:360] acquireMachinesLock for ha-472903-m04: {Name:mkdbbd0d5b3cd7ad4b13d37f2d562d6d6421c5de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:13:45.143618  838391 start.go:364] duration metric: took 45.935µs to acquireMachinesLock for "ha-472903-m04"
	I0917 00:13:45.143643  838391 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:13:45.143650  838391 fix.go:54] fixHost starting: m04
	I0917 00:13:45.143945  838391 cli_runner.go:164] Run: docker container inspect ha-472903-m04 --format={{.State.Status}}
	I0917 00:13:45.161874  838391 fix.go:112] recreateIfNeeded on ha-472903-m04: state=Stopped err=<nil>
	W0917 00:13:45.161907  838391 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:13:45.163684  838391 out.go:252] * Restarting existing docker container for "ha-472903-m04" ...
	I0917 00:13:45.163768  838391 cli_runner.go:164] Run: docker start ha-472903-m04
	I0917 00:13:45.414854  838391 cli_runner.go:164] Run: docker container inspect ha-472903-m04 --format={{.State.Status}}
	I0917 00:13:45.433545  838391 kic.go:430] container "ha-472903-m04" state is running.
	I0917 00:13:45.433944  838391 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m04
	I0917 00:13:45.452344  838391 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0917 00:13:45.452626  838391 machine.go:93] provisionDockerMachine start ...
	I0917 00:13:45.452705  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	I0917 00:13:45.471203  838391 main.go:141] libmachine: Using SSH client type: native
	I0917 00:13:45.471486  838391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33589 <nil> <nil>}
	I0917 00:13:45.471509  838391 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:13:45.472182  838391 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55516->127.0.0.1:33589: read: connection reset by peer
	I0917 00:13:48.473360  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:13:51.474441  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:13:54.475694  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:13:57.476729  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:00.477687  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:03.477978  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:06.479736  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:09.480885  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:12.482720  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:15.483800  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:18.484741  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:21.485809  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:24.487156  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:27.488676  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:30.489805  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:33.490276  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:36.491714  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:39.492658  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:42.493967  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:45.494632  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:48.495764  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:51.496767  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:54.497734  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:57.499659  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:00.500675  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:03.501862  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:06.503834  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:09.505079  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:12.507641  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:15.508761  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:18.509736  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:21.510672  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:24.512280  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:27.514552  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:30.515709  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:33.516144  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:36.518405  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:39.519733  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:42.521625  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:45.522451  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:48.523249  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:51.524945  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:54.525931  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:57.527643  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:16:00.528649  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:16:03.529267  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:16:06.531578  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:16:09.532530  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:16:12.534632  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:16:15.537051  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:16:18.537304  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:16:21.538664  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:16:24.539680  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:16:27.541681  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:16:30.542852  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:16:33.543744  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:16:36.544245  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:16:39.544518  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:16:42.546746  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:16:45.548509  838391 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:16:45.548571  838391 ubuntu.go:182] provisioning hostname "ha-472903-m04"
	I0917 00:16:45.548664  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:16:45.567482  838391 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	I0917 00:16:45.567574  838391 machine.go:96] duration metric: took 3m0.114930329s to provisionDockerMachine
	I0917 00:16:45.567666  838391 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:16:45.567704  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:16:45.586204  838391 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	I0917 00:16:45.586381  838391 retry.go:31] will retry after 243.120334ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:16:45.829742  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:16:45.848018  838391 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	I0917 00:16:45.848165  838391 retry.go:31] will retry after 204.404017ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:16:46.053620  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:16:46.071508  838391 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	I0917 00:16:46.071648  838391 retry.go:31] will retry after 637.92377ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:16:46.710530  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:16:46.728463  838391 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	W0917 00:16:46.728598  838391 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:16:46.728620  838391 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:16:46.728676  838391 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:16:46.728722  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:16:46.746202  838391 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	I0917 00:16:46.746328  838391 retry.go:31] will retry after 328.494131ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:16:47.075622  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:16:47.094084  838391 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	I0917 00:16:47.094205  838391 retry.go:31] will retry after 397.703456ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:16:47.492843  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:16:47.511608  838391 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	I0917 00:16:47.511709  838391 retry.go:31] will retry after 759.296258ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:16:48.271608  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:16:48.289666  838391 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	W0917 00:16:48.289812  838391 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:16:48.289830  838391 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:16:48.289844  838391 fix.go:56] duration metric: took 3m3.146193546s for fixHost
	I0917 00:16:48.289858  838391 start.go:83] releasing machines lock for "ha-472903-m04", held for 3m3.146226948s
	W0917 00:16:48.289881  838391 start.go:714] error starting host: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:16:48.289975  838391 out.go:285] ! StartHost failed, but will try again: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:16:48.289987  838391 start.go:729] Will try again in 5 seconds ...
	I0917 00:16:53.290141  838391 start.go:360] acquireMachinesLock for ha-472903-m04: {Name:mkdbbd0d5b3cd7ad4b13d37f2d562d6d6421c5de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:16:53.290272  838391 start.go:364] duration metric: took 94.983µs to acquireMachinesLock for "ha-472903-m04"
	I0917 00:16:53.290297  838391 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:16:53.290303  838391 fix.go:54] fixHost starting: m04
	I0917 00:16:53.290646  838391 cli_runner.go:164] Run: docker container inspect ha-472903-m04 --format={{.State.Status}}
	I0917 00:16:53.309611  838391 fix.go:112] recreateIfNeeded on ha-472903-m04: state=Stopped err=<nil>
	W0917 00:16:53.309640  838391 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:16:53.311233  838391 out.go:252] * Restarting existing docker container for "ha-472903-m04" ...
	I0917 00:16:53.311300  838391 cli_runner.go:164] Run: docker start ha-472903-m04
	I0917 00:16:53.541222  838391 cli_runner.go:164] Run: docker container inspect ha-472903-m04 --format={{.State.Status}}
	I0917 00:16:53.560095  838391 kic.go:430] container "ha-472903-m04" state is running.
	I0917 00:16:53.560573  838391 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m04
	I0917 00:16:53.580208  838391 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0917 00:16:53.580538  838391 machine.go:93] provisionDockerMachine start ...
	I0917 00:16:53.580642  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	I0917 00:16:53.599573  838391 main.go:141] libmachine: Using SSH client type: native
	I0917 00:16:53.599853  838391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33594 <nil> <nil>}
	I0917 00:16:53.599867  838391 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:16:53.600481  838391 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36824->127.0.0.1:33594: read: connection reset by peer
	I0917 00:16:56.602700  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:16:59.603638  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:02.605644  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:05.607721  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:08.608037  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:11.609632  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:14.610658  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:17.612855  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:20.613697  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:23.614397  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:26.616706  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:29.617175  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:32.618651  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:35.620635  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:38.621502  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:41.622948  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:44.624290  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:47.624933  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:50.625690  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:53.626092  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:56.628195  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:59.629019  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:02.631303  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:05.632822  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:08.633316  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:11.635679  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:14.636798  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:17.638657  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:20.639654  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:23.640721  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:26.642651  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:29.643601  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:32.645639  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:35.647624  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:38.648379  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:41.650676  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:44.651634  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:47.653582  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:50.654648  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:53.655970  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:56.658210  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:59.658941  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:19:02.661113  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:19:05.663405  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:19:08.664478  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:19:11.666153  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:19:14.667567  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:19:17.668447  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:19:20.668923  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:19:23.669615  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:19:26.671877  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:19:29.673145  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:19:32.674637  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:19:35.677064  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:19:38.678152  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:19:41.680118  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:19:44.681450  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:19:47.682442  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:19:50.682884  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:19:53.683789  838391 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:19:53.683836  838391 ubuntu.go:182] provisioning hostname "ha-472903-m04"
	I0917 00:19:53.683924  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:19:53.702821  838391 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	I0917 00:19:53.702901  838391 machine.go:96] duration metric: took 3m0.122343923s to provisionDockerMachine
	I0917 00:19:53.702985  838391 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:19:53.703018  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:19:53.720196  838391 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	I0917 00:19:53.720349  838391 retry.go:31] will retry after 273.264226ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:19:53.994608  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:19:54.012758  838391 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	I0917 00:19:54.012877  838391 retry.go:31] will retry after 451.557634ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:19:54.465611  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:19:54.483957  838391 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	I0917 00:19:54.484069  838391 retry.go:31] will retry after 372.513327ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:19:54.857680  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:19:54.875097  838391 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	W0917 00:19:54.875215  838391 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:19:54.875229  838391 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:19:54.875274  838391 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:19:54.875305  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:19:54.892677  838391 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	I0917 00:19:54.892775  838391 retry.go:31] will retry after 244.26035ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:19:55.137223  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:19:55.156010  838391 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	I0917 00:19:55.156141  838391 retry.go:31] will retry after 195.694179ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:19:55.352609  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:19:55.370515  838391 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	I0917 00:19:55.370623  838391 retry.go:31] will retry after 349.362306ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:19:55.720142  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:19:55.737839  838391 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	I0917 00:19:55.737968  838391 retry.go:31] will retry after 818.87418ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:19:56.557986  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:19:56.575881  838391 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	W0917 00:19:56.576024  838391 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:19:56.576041  838391 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:19:56.576050  838391 fix.go:56] duration metric: took 3m3.285747581s for fixHost
	I0917 00:19:56.576057  838391 start.go:83] releasing machines lock for "ha-472903-m04", held for 3m3.285773333s
	W0917 00:19:56.576146  838391 out.go:285] * Failed to start docker container. Running "minikube delete -p ha-472903" may fix it: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:19:56.578148  838391 out.go:203] 
	W0917 00:19:56.579015  838391 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: Failed to start host: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:19:56.579029  838391 out.go:285] * 
	W0917 00:19:56.580824  838391 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 00:19:56.581780  838391 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c8a737e1be33c       6e38f40d628db       5 minutes ago       Running             storage-provisioner       4                   fe7a407d2eb97       storage-provisioner
	2a56abb41f49d       409467f978b4a       6 minutes ago       Running             kindnet-cni               1                   2c028f64de7ca       kindnet-lh7dv
	b4ccada04ba90       8c811b4aec35f       6 minutes ago       Running             busybox                   1                   8196f32c07b91       busybox-7b57f96db7-6hrm6
	aeea8f1127caf       52546a367cc9e       6 minutes ago       Running             coredns                   1                   91d98fd766ced       coredns-66bc5c9577-qn8m7
	9fc46931c7aae       52546a367cc9e       6 minutes ago       Running             coredns                   1                   5e2ab87af7d54       coredns-66bc5c9577-c94hz
	360a9ae449a3a       6e38f40d628db       6 minutes ago       Exited              storage-provisioner       3                   fe7a407d2eb97       storage-provisioner
	b1c8344888d7d       df0860106674d       6 minutes ago       Running             kube-proxy                1                   b64b7dfe57cfc       kube-proxy-d4m8f
	6ce9c5e712887       765655ea60781       6 minutes ago       Running             kube-vip                  0                   1bc9d50f267a3       kube-vip-ha-472903
	9685cc588651c       46169d968e920       6 minutes ago       Running             kube-scheduler            1                   50f4cca94a4f8       kube-scheduler-ha-472903
	c3f8ee22fca28       a0af72f2ec6d6       6 minutes ago       Running             kube-controller-manager   1                   811d527e0af1e       kube-controller-manager-ha-472903
	96d46a46d9093       90550c43ad2bc       6 minutes ago       Running             kube-apiserver            1                   9fcac3d988698       kube-apiserver-ha-472903
	90b187ed887fa       5f1f5298c888d       6 minutes ago       Running             etcd                      1                   070db27b7a5dd       etcd-ha-472903
	0a41d8b587e02       8c811b4aec35f       21 minutes ago      Exited              busybox                   0                   a2422ee3e6e6d       busybox-7b57f96db7-6hrm6
	9f103b05d2d6f       52546a367cc9e       22 minutes ago      Exited              coredns                   0                   9579263342827       coredns-66bc5c9577-c94hz
	3b457407f10e3       52546a367cc9e       22 minutes ago      Exited              coredns                   0                   290cfb537788e       coredns-66bc5c9577-qn8m7
	cc69d2451cb65       409467f978b4a       23 minutes ago      Exited              kindnet-cni               0                   3e17d6ae9b2a6       kindnet-lh7dv
	92dd4d116eb03       df0860106674d       23 minutes ago      Exited              kube-proxy                0                   8c0ecd5301326       kube-proxy-d4m8f
	bba28cace6502       46169d968e920       23 minutes ago      Exited              kube-scheduler            0                   f18dd7697c60f       kube-scheduler-ha-472903
	087290a41f59c       a0af72f2ec6d6       23 minutes ago      Exited              kube-controller-manager   0                   0760ebe1d2a56       kube-controller-manager-ha-472903
	0aba62132d764       90550c43ad2bc       23 minutes ago      Exited              kube-apiserver            0                   8ad1fa8bc0267       kube-apiserver-ha-472903
	23c0af0bdbe95       5f1f5298c888d       23 minutes ago      Exited              etcd                      0                   b01a62742caec       etcd-ha-472903
	
	
	==> containerd <==
	Sep 17 00:14:06 ha-472903 containerd[478]: time="2025-09-17T00:14:06.742622145Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 17 00:14:07 ha-472903 containerd[478]: time="2025-09-17T00:14:07.230475449Z" level=info msg="RemoveContainer for \"5a5a17cca6c0a643b6c0881dab5508dcb7de8e6ad77d7e6ecb81d434ab2cc8a1\""
	Sep 17 00:14:07 ha-472903 containerd[478]: time="2025-09-17T00:14:07.235120578Z" level=info msg="RemoveContainer for \"5a5a17cca6c0a643b6c0881dab5508dcb7de8e6ad77d7e6ecb81d434ab2cc8a1\" returns successfully"
	Sep 17 00:14:20 ha-472903 containerd[478]: time="2025-09-17T00:14:20.057131193Z" level=info msg="CreateContainer within sandbox \"fe7a407d2eb97d648dbca1e85a5587efe15f437488c6dd3ef99c90d4b44796b2\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:4,}"
	Sep 17 00:14:20 ha-472903 containerd[478]: time="2025-09-17T00:14:20.067657299Z" level=info msg="CreateContainer within sandbox \"fe7a407d2eb97d648dbca1e85a5587efe15f437488c6dd3ef99c90d4b44796b2\" for &ContainerMetadata{Name:storage-provisioner,Attempt:4,} returns container id \"c8a737e1be33c6e4b6e17f5359483d22d3eeb7ca2497546109c1097eb9343a7f\""
	Sep 17 00:14:20 ha-472903 containerd[478]: time="2025-09-17T00:14:20.068219427Z" level=info msg="StartContainer for \"c8a737e1be33c6e4b6e17f5359483d22d3eeb7ca2497546109c1097eb9343a7f\""
	Sep 17 00:14:20 ha-472903 containerd[478]: time="2025-09-17T00:14:20.127854739Z" level=info msg="StartContainer for \"c8a737e1be33c6e4b6e17f5359483d22d3eeb7ca2497546109c1097eb9343a7f\" returns successfully"
	Sep 17 00:14:29 ha-472903 containerd[478]: time="2025-09-17T00:14:29.048175952Z" level=info msg="RemoveContainer for \"8683544e2a9d579448e28b8f33653e2c8d1315b2d07bd7b4ce574428d93c6f3a\""
	Sep 17 00:14:29 ha-472903 containerd[478]: time="2025-09-17T00:14:29.051943763Z" level=info msg="RemoveContainer for \"8683544e2a9d579448e28b8f33653e2c8d1315b2d07bd7b4ce574428d93c6f3a\" returns successfully"
	Sep 17 00:14:29 ha-472903 containerd[478]: time="2025-09-17T00:14:29.053740259Z" level=info msg="StopPodSandbox for \"4c425da29992d5b1e2fc336043685d4cf113ef650aab347de0664ccbbee0de50\""
	Sep 17 00:14:29 ha-472903 containerd[478]: time="2025-09-17T00:14:29.053865890Z" level=info msg="TearDown network for sandbox \"4c425da29992d5b1e2fc336043685d4cf113ef650aab347de0664ccbbee0de50\" successfully"
	Sep 17 00:14:29 ha-472903 containerd[478]: time="2025-09-17T00:14:29.053890854Z" level=info msg="StopPodSandbox for \"4c425da29992d5b1e2fc336043685d4cf113ef650aab347de0664ccbbee0de50\" returns successfully"
	Sep 17 00:14:29 ha-472903 containerd[478]: time="2025-09-17T00:14:29.054466776Z" level=info msg="RemovePodSandbox for \"4c425da29992d5b1e2fc336043685d4cf113ef650aab347de0664ccbbee0de50\""
	Sep 17 00:14:29 ha-472903 containerd[478]: time="2025-09-17T00:14:29.054510253Z" level=info msg="Forcibly stopping sandbox \"4c425da29992d5b1e2fc336043685d4cf113ef650aab347de0664ccbbee0de50\""
	Sep 17 00:14:29 ha-472903 containerd[478]: time="2025-09-17T00:14:29.054597568Z" level=info msg="TearDown network for sandbox \"4c425da29992d5b1e2fc336043685d4cf113ef650aab347de0664ccbbee0de50\" successfully"
	Sep 17 00:14:29 ha-472903 containerd[478]: time="2025-09-17T00:14:29.058233686Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4c425da29992d5b1e2fc336043685d4cf113ef650aab347de0664ccbbee0de50\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Sep 17 00:14:29 ha-472903 containerd[478]: time="2025-09-17T00:14:29.058306846Z" level=info msg="RemovePodSandbox \"4c425da29992d5b1e2fc336043685d4cf113ef650aab347de0664ccbbee0de50\" returns successfully"
	Sep 17 00:14:29 ha-472903 containerd[478]: time="2025-09-17T00:14:29.058804033Z" level=info msg="StopPodSandbox for \"1c0713f862ea047ef39e7ae39aea7b7769255565bbf61da2859ac341b5b32bca\""
	Sep 17 00:14:29 ha-472903 containerd[478]: time="2025-09-17T00:14:29.058920078Z" level=info msg="TearDown network for sandbox \"1c0713f862ea047ef39e7ae39aea7b7769255565bbf61da2859ac341b5b32bca\" successfully"
	Sep 17 00:14:29 ha-472903 containerd[478]: time="2025-09-17T00:14:29.058947458Z" level=info msg="StopPodSandbox for \"1c0713f862ea047ef39e7ae39aea7b7769255565bbf61da2859ac341b5b32bca\" returns successfully"
	Sep 17 00:14:29 ha-472903 containerd[478]: time="2025-09-17T00:14:29.059233694Z" level=info msg="RemovePodSandbox for \"1c0713f862ea047ef39e7ae39aea7b7769255565bbf61da2859ac341b5b32bca\""
	Sep 17 00:14:29 ha-472903 containerd[478]: time="2025-09-17T00:14:29.059274964Z" level=info msg="Forcibly stopping sandbox \"1c0713f862ea047ef39e7ae39aea7b7769255565bbf61da2859ac341b5b32bca\""
	Sep 17 00:14:29 ha-472903 containerd[478]: time="2025-09-17T00:14:29.059351772Z" level=info msg="TearDown network for sandbox \"1c0713f862ea047ef39e7ae39aea7b7769255565bbf61da2859ac341b5b32bca\" successfully"
	Sep 17 00:14:29 ha-472903 containerd[478]: time="2025-09-17T00:14:29.062137499Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1c0713f862ea047ef39e7ae39aea7b7769255565bbf61da2859ac341b5b32bca\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Sep 17 00:14:29 ha-472903 containerd[478]: time="2025-09-17T00:14:29.062200412Z" level=info msg="RemovePodSandbox \"1c0713f862ea047ef39e7ae39aea7b7769255565bbf61da2859ac341b5b32bca\" returns successfully"
	
	
	==> coredns [3b457407f10e357ce33da7fa3fb4333f8312f0d3e3570cf8528cdcac8f5a1d0f] <==
	[INFO] 10.244.1.2:53799 - 6 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,rd,ra 126 0.009949044s
	[INFO] 10.244.0.4:39485 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157098s
	[INFO] 10.244.0.4:57871 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 89 0.000750185s
	[INFO] 10.244.0.4:53410 - 5 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,aa,rd,ra 126 0.000089028s
	[INFO] 10.244.1.2:37733 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150317s
	[INFO] 10.244.1.2:59346 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.028128363s
	[INFO] 10.244.1.2:43091 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.01004668s
	[INFO] 10.244.1.2:37227 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000191819s
	[INFO] 10.244.1.2:40079 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000125376s
	[INFO] 10.244.0.4:38168 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000181114s
	[INFO] 10.244.0.4:60067 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000087147s
	[INFO] 10.244.0.4:47611 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000122939s
	[INFO] 10.244.0.4:37626 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000121195s
	[INFO] 10.244.1.2:42817 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159509s
	[INFO] 10.244.1.2:33910 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000186538s
	[INFO] 10.244.1.2:37929 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000109836s
	[INFO] 10.244.0.4:50698 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000212263s
	[INFO] 10.244.0.4:33166 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000100167s
	[INFO] 10.244.1.2:50377 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157558s
	[INFO] 10.244.1.2:39491 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000132025s
	[INFO] 10.244.1.2:50075 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000112028s
	[INFO] 10.244.0.4:58743 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000149175s
	[INFO] 10.244.0.4:52796 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000114946s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [9f103b05d2d6fd9df1ffca0135173363251e58587aa3f9093200d96a7302d315] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45239 - 14115 "HINFO IN 5883645869461503498.3950535614037284853. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.058516241s
	[INFO] 10.244.1.2:55352 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 89 0.003252862s
	[INFO] 10.244.0.4:33650 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.001640931s
	[INFO] 10.244.0.4:50077 - 6 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,rd,ra 124 0.000621363s
	[INFO] 10.244.1.2:48439 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000189187s
	[INFO] 10.244.1.2:39582 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000151327s
	[INFO] 10.244.1.2:59539 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000140715s
	[INFO] 10.244.0.4:42999 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000177514s
	[INFO] 10.244.0.4:36769 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.010694753s
	[INFO] 10.244.0.4:53074 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000158932s
	[INFO] 10.244.0.4:57223 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00012213s
	[INFO] 10.244.1.2:50810 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000176678s
	[INFO] 10.244.0.4:58045 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142445s
	[INFO] 10.244.0.4:39777 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000123555s
	[INFO] 10.244.1.2:59022 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000148853s
	[INFO] 10.244.0.4:45136 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001657s
	[INFO] 10.244.0.4:37711 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000134332s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [9fc46931c7aae5fea2058b723439b03184beee352ff9a7efcf262818181a635d] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60683 - 9436 "HINFO IN 7751308179169184926.6829077423459472962. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.019258685s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> coredns [aeea8f1127caf7117ade119a9e492104789925a531209d0aba3022cd18cb7ce1] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40200 - 1569 "HINFO IN 6158707635578374570.8737516254824064952. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.057247461s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               ha-472903
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-472903
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=ha-472903
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_16T23_56_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Sep 2025 23:56:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-472903
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:20:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:17:29 +0000   Tue, 16 Sep 2025 23:56:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:17:29 +0000   Tue, 16 Sep 2025 23:56:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:17:29 +0000   Tue, 16 Sep 2025 23:56:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:17:29 +0000   Tue, 16 Sep 2025 23:56:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-472903
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 e92083047f3148b2867b7885ff1f4fb4
	  System UUID:                695af4c7-28fb-4299-9454-75db3262ca2c
	  Boot ID:                    4acfd7d3-9698-436f-b4ae-efdf6bd483d5
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-6hrm6             0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 coredns-66bc5c9577-c94hz             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     23m
	  kube-system                 coredns-66bc5c9577-qn8m7             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     23m
	  kube-system                 etcd-ha-472903                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         23m
	  kube-system                 kindnet-lh7dv                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      23m
	  kube-system                 kube-apiserver-ha-472903             250m (3%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-controller-manager-ha-472903    200m (2%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-proxy-d4m8f                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-scheduler-ha-472903             100m (1%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-vip-ha-472903                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m30s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m29s                  kube-proxy       
	  Normal  Starting                 23m                    kube-proxy       
	  Normal  NodeHasSufficientMemory  23m (x8 over 23m)      kubelet          Node ha-472903 status is now: NodeHasSufficientMemory
	  Normal  Starting                 23m                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  23m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     23m (x7 over 23m)      kubelet          Node ha-472903 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    23m (x8 over 23m)      kubelet          Node ha-472903 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 23m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     23m                    kubelet          Node ha-472903 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  23m                    kubelet          Node ha-472903 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23m                    kubelet          Node ha-472903 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           23m                    node-controller  Node ha-472903 event: Registered Node ha-472903 in Controller
	  Normal  RegisteredNode           22m                    node-controller  Node ha-472903 event: Registered Node ha-472903 in Controller
	  Normal  RegisteredNode           22m                    node-controller  Node ha-472903 event: Registered Node ha-472903 in Controller
	  Normal  RegisteredNode           8m10s                  node-controller  Node ha-472903 event: Registered Node ha-472903 in Controller
	  Normal  Starting                 6m37s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m37s (x8 over 6m37s)  kubelet          Node ha-472903 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m37s (x8 over 6m37s)  kubelet          Node ha-472903 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m37s (x7 over 6m37s)  kubelet          Node ha-472903 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m37s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m28s                  node-controller  Node ha-472903 event: Registered Node ha-472903 in Controller
	  Normal  RegisteredNode           6m28s                  node-controller  Node ha-472903 event: Registered Node ha-472903 in Controller
	  Normal  RegisteredNode           6m23s                  node-controller  Node ha-472903 event: Registered Node ha-472903 in Controller
	
	
	Name:               ha-472903-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-472903-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=ha-472903
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_16T23_57_23_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Sep 2025 23:57:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-472903-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:20:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:18:52 +0000   Tue, 16 Sep 2025 23:57:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:18:52 +0000   Tue, 16 Sep 2025 23:57:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:18:52 +0000   Tue, 16 Sep 2025 23:57:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:18:52 +0000   Tue, 16 Sep 2025 23:57:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-472903-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 0e1a1fb76ba244e2b9677af4de050ca0
	  System UUID:                85df9db8-f21a-4038-9f8c-4cc1d81dc0d5
	  Boot ID:                    4acfd7d3-9698-436f-b4ae-efdf6bd483d5
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-4jfjt                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 etcd-ha-472903-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         22m
	  kube-system                 kindnet-q7c7s                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      22m
	  kube-system                 kube-apiserver-ha-472903-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-ha-472903-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-58lkb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-ha-472903-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-vip-ha-472903-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 22m                    kube-proxy       
	  Normal  RegisteredNode           22m                    node-controller  Node ha-472903-m02 event: Registered Node ha-472903-m02 in Controller
	  Normal  RegisteredNode           22m                    node-controller  Node ha-472903-m02 event: Registered Node ha-472903-m02 in Controller
	  Normal  RegisteredNode           22m                    node-controller  Node ha-472903-m02 event: Registered Node ha-472903-m02 in Controller
	  Normal  NodeAllocatableEnforced  8m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     8m16s (x7 over 8m16s)  kubelet          Node ha-472903-m02 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    8m16s (x8 over 8m16s)  kubelet          Node ha-472903-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  8m16s (x8 over 8m16s)  kubelet          Node ha-472903-m02 status is now: NodeHasSufficientMemory
	  Normal  Starting                 8m16s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m10s                  node-controller  Node ha-472903-m02 event: Registered Node ha-472903-m02 in Controller
	  Normal  Starting                 6m35s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m35s (x8 over 6m35s)  kubelet          Node ha-472903-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m35s (x8 over 6m35s)  kubelet          Node ha-472903-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m35s (x7 over 6m35s)  kubelet          Node ha-472903-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m28s                  node-controller  Node ha-472903-m02 event: Registered Node ha-472903-m02 in Controller
	  Normal  RegisteredNode           6m28s                  node-controller  Node ha-472903-m02 event: Registered Node ha-472903-m02 in Controller
	  Normal  RegisteredNode           6m23s                  node-controller  Node ha-472903-m02 event: Registered Node ha-472903-m02 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 86 e8 75 4b 01 57 08 06
	[  +0.025562] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff be 3d a9 85 b1 bd 08 06
	[ +13.150028] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff d6 5c f0 26 cd ba 08 06
	[  +0.000341] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a 20 90 fb f5 d8 08 06
	[ +28.639349] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 06 26 63 8d db 90 08 06
	[  +0.000417] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff be 3d a9 85 b1 bd 08 06
	[  +0.836892] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 56 cc 9b 52 38 94 08 06
	[  +0.080327] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3e 79 8e c8 7e 37 08 06
	[Sep16 23:40] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 66 3b 76 df aa 6a 08 06
	[ +20.325550] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff de 39 4b 41 df 63 08 06
	[  +0.000318] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 66 3b 76 df aa 6a 08 06
	[  +8.925776] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3e cd c1 f7 dc c8 08 06
	[  +0.000373] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 3e 79 8e c8 7e 37 08 06
	
	
	==> etcd [23c0af0bdbe9526d53769461ed9f80d8c743b02e625b65cce39c888f5e7d4b4e] <==
	{"level":"warn","ts":"2025-09-17T00:13:08.865242Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128040017673679127,"retry-timeout":"500ms"}
	{"level":"info","ts":"2025-09-17T00:13:09.078092Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"aec36adc501070cc is starting a new election at term 2"}
	{"level":"info","ts":"2025-09-17T00:13:09.078216Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2025-09-17T00:13:09.078269Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 2, index: 4061] sent MsgPreVote request to 3aa85cdcd5e5557b at term 2"}
	{"level":"info","ts":"2025-09-17T00:13:09.078323Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 2, index: 4061] sent MsgPreVote request to ab9d0391dce79465 at term 2"}
	{"level":"info","ts":"2025-09-17T00:13:09.078391Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2025-09-17T00:13:09.078467Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"warn","ts":"2025-09-17T00:13:09.366348Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128040017673679127,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-09-17T00:13:09.733983Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"2.00012067s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"","error":"context deadline exceeded"}
	{"level":"info","ts":"2025-09-17T00:13:09.734106Z","caller":"traceutil/trace.go:172","msg":"trace[1703373101] range","detail":"{range_begin:/registry/health; range_end:; }","duration":"2.000255365s","start":"2025-09-17T00:13:07.733837Z","end":"2025-09-17T00:13:09.734092Z","steps":["trace[1703373101] 'agreement among raft nodes before linearized reading'  (duration: 2.000119103s)"],"step_count":1}
	{"level":"warn","ts":"2025-09-17T00:13:09.734220Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-17T00:13:07.733823Z","time spent":"2.000381887s","remote":"127.0.0.1:56470","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":0,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2025-09-17T00:13:09.824490Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"10.001550907s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"","error":"context canceled"}
	{"level":"info","ts":"2025-09-17T00:13:09.830580Z","caller":"traceutil/trace.go:172","msg":"trace[2000130708] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; }","duration":"10.007653802s","start":"2025-09-17T00:12:59.822907Z","end":"2025-09-17T00:13:09.830561Z","steps":["trace[2000130708] 'agreement among raft nodes before linearized reading'  (duration: 10.001549225s)"],"step_count":1}
	{"level":"warn","ts":"2025-09-17T00:13:09.830689Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-17T00:12:59.822890Z","time spent":"10.007768318s","remote":"127.0.0.1:56876","response type":"/etcdserverpb.KV/Range","request count":0,"request size":69,"response count":0,"response size":0,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 "}
	2025/09/17 00:13:09 WARNING: [core] [Server #5]grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2025-09-17T00:13:09.866876Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128040017673679127,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-09-17T00:13:10.366968Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128040017673679127,"retry-timeout":"500ms"}
	{"level":"info","ts":"2025-09-17T00:13:10.478109Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"aec36adc501070cc is starting a new election at term 2"}
	{"level":"info","ts":"2025-09-17T00:13:10.478171Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2025-09-17T00:13:10.478195Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 2, index: 4061] sent MsgPreVote request to 3aa85cdcd5e5557b at term 2"}
	{"level":"info","ts":"2025-09-17T00:13:10.478218Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 2, index: 4061] sent MsgPreVote request to ab9d0391dce79465 at term 2"}
	{"level":"info","ts":"2025-09-17T00:13:10.478252Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2025-09-17T00:13:10.478278Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"warn","ts":"2025-09-17T00:13:10.720561Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-17T00:13:03.715662Z","time spent":"7.004893477s","remote":"127.0.0.1:56646","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"warn","ts":"2025-09-17T00:13:10.867073Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128040017673679127,"retry-timeout":"500ms"}
	
	
	==> etcd [90b187ed887fae063d0e3d6e7f9316abbc50f1e7b9c092596b43a1c43c86e79d] <==
	{"level":"info","ts":"2025-09-17T00:13:39.653688Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"ab9d0391dce79465"}
	{"level":"info","ts":"2025-09-17T00:13:39.662722Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"ab9d0391dce79465"}
	{"level":"info","ts":"2025-09-17T00:13:39.663230Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"ab9d0391dce79465"}
	{"level":"warn","ts":"2025-09-17T00:13:39.862686Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"ab9d0391dce79465","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-09-17T00:13:39.862713Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"ab9d0391dce79465","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-09-17T00:20:00.817249Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:43226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:20:00.833811Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:43246","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-17T00:20:00.852159Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aec36adc501070cc switched to configuration voters=(4226730353838347643 12593026477526642892)"}
	{"level":"info","ts":"2025-09-17T00:20:00.853090Z","caller":"membership/cluster.go:460","msg":"removed member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"ab9d0391dce79465","removed-remote-peer-urls":["https://192.168.49.4:2380"],"removed-remote-peer-is-learner":false}
	{"level":"info","ts":"2025-09-17T00:20:00.853131Z","caller":"rafthttp/peer.go:316","msg":"stopping remote peer","remote-peer-id":"ab9d0391dce79465"}
	{"level":"warn","ts":"2025-09-17T00:20:00.853242Z","caller":"rafthttp/stream.go:285","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"ab9d0391dce79465"}
	{"level":"info","ts":"2025-09-17T00:20:00.853277Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"ab9d0391dce79465"}
	{"level":"warn","ts":"2025-09-17T00:20:00.853321Z","caller":"rafthttp/stream.go:285","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"ab9d0391dce79465"}
	{"level":"info","ts":"2025-09-17T00:20:00.853330Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"ab9d0391dce79465"}
	{"level":"info","ts":"2025-09-17T00:20:00.853433Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"ab9d0391dce79465"}
	{"level":"warn","ts":"2025-09-17T00:20:00.853567Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"ab9d0391dce79465","error":"context canceled"}
	{"level":"warn","ts":"2025-09-17T00:20:00.853601Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"ab9d0391dce79465","error":"failed to read ab9d0391dce79465 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2025-09-17T00:20:00.853755Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"ab9d0391dce79465"}
	{"level":"warn","ts":"2025-09-17T00:20:00.853930Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"ab9d0391dce79465","error":"context canceled"}
	{"level":"info","ts":"2025-09-17T00:20:00.853965Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"ab9d0391dce79465"}
	{"level":"info","ts":"2025-09-17T00:20:00.853994Z","caller":"rafthttp/peer.go:321","msg":"stopped remote peer","remote-peer-id":"ab9d0391dce79465"}
	{"level":"info","ts":"2025-09-17T00:20:00.854008Z","caller":"rafthttp/transport.go:354","msg":"removed remote peer","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"ab9d0391dce79465"}
	{"level":"info","ts":"2025-09-17T00:20:00.854032Z","caller":"etcdserver/server.go:1752","msg":"applied a configuration change through raft","local-member-id":"aec36adc501070cc","raft-conf-change":"ConfChangeRemoveNode","raft-conf-change-node-id":"ab9d0391dce79465"}
	{"level":"warn","ts":"2025-09-17T00:20:00.860309Z","caller":"rafthttp/http.go:396","msg":"rejected stream from remote peer because it was removed","local-member-id":"aec36adc501070cc","remote-peer-id-stream-handler":"aec36adc501070cc","remote-peer-id-from":"ab9d0391dce79465"}
	{"level":"warn","ts":"2025-09-17T00:20:00.861564Z","caller":"embed/config_logging.go:188","msg":"rejected connection on peer endpoint","remote-addr":"192.168.49.4:57216","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 00:20:06 up  3:02,  0 users,  load average: 0.61, 0.77, 0.87
	Linux ha-472903 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [2a56abb41f49d6755de68bb41070eee7c07fee5950b2584042a3850228b3c274] <==
	I0917 00:19:17.397702       1 main.go:324] Node ha-472903-m03 has CIDR [10.244.2.0/24] 
	I0917 00:19:27.392489       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:19:27.392527       1 main.go:301] handling current node
	I0917 00:19:27.392543       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:19:27.392548       1 main.go:324] Node ha-472903-m02 has CIDR [10.244.1.0/24] 
	I0917 00:19:27.392752       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:19:27.392765       1 main.go:324] Node ha-472903-m03 has CIDR [10.244.2.0/24] 
	I0917 00:19:37.390063       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:19:37.390101       1 main.go:301] handling current node
	I0917 00:19:37.390118       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:19:37.390123       1 main.go:324] Node ha-472903-m02 has CIDR [10.244.1.0/24] 
	I0917 00:19:37.390327       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:19:37.390339       1 main.go:324] Node ha-472903-m03 has CIDR [10.244.2.0/24] 
	I0917 00:19:47.397482       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:19:47.397526       1 main.go:301] handling current node
	I0917 00:19:47.397543       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:19:47.397548       1 main.go:324] Node ha-472903-m02 has CIDR [10.244.1.0/24] 
	I0917 00:19:47.397996       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:19:47.398026       1 main.go:324] Node ha-472903-m03 has CIDR [10.244.2.0/24] 
	I0917 00:19:57.390658       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:19:57.390704       1 main.go:301] handling current node
	I0917 00:19:57.390723       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:19:57.390729       1 main.go:324] Node ha-472903-m02 has CIDR [10.244.1.0/24] 
	I0917 00:19:57.390896       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:19:57.391108       1 main.go:324] Node ha-472903-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [cc69d2451cb65860b5bc78e027be2fc1cb0f9fa6542b4abe3bc1ff1c90a8fe60] <==
	I0917 00:12:27.503889       1 main.go:324] Node ha-472903-m02 has CIDR [10.244.1.0/24] 
	I0917 00:12:37.507295       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:12:37.507338       1 main.go:301] handling current node
	I0917 00:12:37.507353       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:12:37.507359       1 main.go:324] Node ha-472903-m02 has CIDR [10.244.1.0/24] 
	I0917 00:12:37.507565       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:12:37.507578       1 main.go:324] Node ha-472903-m03 has CIDR [10.244.2.0/24] 
	I0917 00:12:47.503578       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:12:47.503630       1 main.go:324] Node ha-472903-m03 has CIDR [10.244.2.0/24] 
	I0917 00:12:47.503841       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:12:47.503857       1 main.go:301] handling current node
	I0917 00:12:47.503874       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:12:47.503882       1 main.go:324] Node ha-472903-m02 has CIDR [10.244.1.0/24] 
	I0917 00:12:57.503552       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:12:57.503592       1 main.go:301] handling current node
	I0917 00:12:57.503612       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:12:57.503618       1 main.go:324] Node ha-472903-m02 has CIDR [10.244.1.0/24] 
	I0917 00:12:57.504021       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:12:57.504066       1 main.go:324] Node ha-472903-m03 has CIDR [10.244.2.0/24] 
	I0917 00:13:07.510512       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:13:07.510552       1 main.go:324] Node ha-472903-m03 has CIDR [10.244.2.0/24] 
	I0917 00:13:07.511170       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:13:07.511196       1 main.go:301] handling current node
	I0917 00:13:07.511281       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:13:07.511312       1 main.go:324] Node ha-472903-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [0aba62132d764965d8e1a80a4a6345bb7e34892b23143da4a7af3450cd465d6c] <==
	E0917 00:13:11.166753       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:13:11.166775       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:13:11.166780       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:13:11.166731       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:13:11.166754       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:13:11.167368       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:13:11.167554       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:13:11.167606       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:13:11.167640       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:13:11.167659       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:13:11.168321       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:13:11.168332       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:13:11.168355       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:13:11.168358       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:13:11.168761       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:13:11.168807       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:13:11.168826       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:13:11.168844       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:13:11.168845       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:13:11.168866       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:13:11.168873       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:13:11.168898       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:13:11.169017       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:13:11.169052       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:13:11.169077       1 watcher.go:335] watch chan error: etcdserver: no leader
	
	
	==> kube-apiserver [96d46a46d90937e1dc254cbb641e1f12887151faabbe128f2cc51a8a833fe573] <==
	I0917 00:13:35.109530       1 aggregator.go:171] initial CRD sync complete...
	I0917 00:13:35.109558       1 autoregister_controller.go:144] Starting autoregister controller
	I0917 00:13:35.109566       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0917 00:13:35.109573       1 cache.go:39] Caches are synced for autoregister controller
	W0917 00:13:35.114733       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.3 192.168.49.4]
	I0917 00:13:35.116809       1 controller.go:667] quota admission added evaluator for: endpoints
	I0917 00:13:35.117772       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0917 00:13:35.127627       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0917 00:13:35.133999       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0917 00:13:35.156218       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0917 00:13:35.994627       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0917 00:13:36.160405       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	W0917 00:13:36.454299       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.4]
	I0917 00:13:38.437732       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0917 00:13:38.895584       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0917 00:14:14.427245       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0917 00:14:34.638077       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:14:55.389838       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:15:41.589543       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:16:00.249213       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:16:50.539266       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:17:30.019039       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:18:00.900712       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:18:53.314317       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:19:24.721832       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [087290a41f59caa4f9bc89759bcec6cf90f47c8a2ab83b7c671a8fff35641df9] <==
	I0916 23:56:54.728442       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0916 23:56:54.728466       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0916 23:56:54.728485       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0916 23:56:54.728644       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0916 23:56:54.728665       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0916 23:56:54.728648       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0916 23:56:54.728914       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0916 23:56:54.730175       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0916 23:56:54.730201       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0916 23:56:54.732432       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0916 23:56:54.733452       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0916 23:56:54.735655       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0916 23:56:54.735714       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0916 23:56:54.735760       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0916 23:56:54.735767       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0916 23:56:54.735772       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0916 23:56:54.740680       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-472903" podCIDRs=["10.244.0.0/24"]
	I0916 23:56:54.749950       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0916 23:57:22.933124       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-472903-m02\" does not exist"
	I0916 23:57:22.943785       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-472903-m02" podCIDRs=["10.244.1.0/24"]
	I0916 23:57:24.681339       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-472903-m02"
	I0916 23:57:51.749676       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-472903-m03\" does not exist"
	I0916 23:57:51.772476       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-472903-m03" podCIDRs=["10.244.2.0/24"]
	E0916 23:57:51.829801       1 daemon_controller.go:346] "Unhandled Error" err="kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kube-proxy\", GenerateName:\"\", Namespace:\"kube-system\", SelfLink:\"\", UID:\"3f5da9fc-6769-4ca8-a715-edeace44c646\", ResourceVersion:\"594\", Generation:1, CreationTimestamp:time.Date(2025, time.September, 16, 23, 56, 49, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"k8s-app\":\"kube-proxy\"}, Annotations:map[string]string{\"deprecated.daemonset.template.generation\":\"1\"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc00222d0e0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:\"\", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"
\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"k8s-app\":\"kube-proxy\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:\"kube-proxy\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSourc
e)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc0021ed7c0), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"xtables-lock\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc002fcdce0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolu
meSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIV
olumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"lib-modules\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc002fcdcf8), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtu
alDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:\"kube-proxy\", Image:\"registry.k8s.io/kube-proxy:v1.34.0\", Command:[]string{\"/usr/local/bin/kube-proxy\", \"--config=/var/lib/kube-proxy/config.conf\", \"--hostname-override=$(NODE_NAME)\"}, Args:[]string(nil), WorkingDir:\"\", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:\"NODE_NAME\", Value:\"\", ValueFrom:(*v1.EnvVarSource)(0xc00144a7e0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.Re
sourceList(nil), Claims:[]v1.ResourceClaim(nil)}, ResizePolicy:[]v1.ContainerResizePolicy(nil), RestartPolicy:(*v1.ContainerRestartPolicy)(nil), RestartPolicyRules:[]v1.ContainerRestartRule(nil), VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:\"kube-proxy\", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/var/lib/kube-proxy\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"xtables-lock\", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/run/xtables.lock\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"lib-modules\", ReadOnly:true, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/lib/modules\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Life
cycle:(*v1.Lifecycle)(nil), TerminationMessagePath:\"/dev/termination-log\", TerminationMessagePolicy:\"File\", ImagePullPolicy:\"IfNotPresent\", SecurityContext:(*v1.SecurityContext)(0xc0019549c0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:\"Always\", TerminationGracePeriodSeconds:(*int64)(0xc001900b18), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:\"ClusterFirst\", NodeSelector:map[string]string{\"kubernetes.io/os\":\"linux\"}, ServiceAccountName:\"kube-proxy\", DeprecatedServiceAccount:\"kube-proxy\", AutomountServiceAccountToken:(*bool)(nil), NodeName:\"\", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001ba1200), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:\"\", Subdomain:\"\", Affinity:(*v1.Affinity)(nil), SchedulerName:\"default-scheduler\", Tolerations:[]v1.Toleration{v1.Toleration{Key:\"\", Operator:\"Exists\", Value:\"\", Effect:\"\", Tole
rationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:\"system-node-critical\", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil), SchedulingGates:[]v1.PodSchedulingGate(nil), ResourceClaims:[]v1.PodResourceClaim(nil), Resources:(*v1.ResourceRequirements)(nil), HostnameOverride:(*string)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:\"RollingUpdate\", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc001e14570)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001900b70)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:2, NumberMisscheduled:0, DesiredNumberScheduled:2, NumberReady:2, ObservedGeneration:1, UpdatedNumberScheduled:2, NumberAvailab
le:2, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps \"kube-proxy\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0916 23:57:54.685322       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-472903-m03"
	
	
	==> kube-controller-manager [c3f8ee22fca28b303f553c3003d1000b80565b4147ba719401c8c5f61921ee41] <==
	I0917 00:13:38.427005       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0917 00:13:38.427138       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0917 00:13:38.428331       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0917 00:13:38.431473       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0917 00:13:38.431610       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0917 00:13:38.431764       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0917 00:13:38.431826       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0917 00:13:38.431860       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0917 00:13:38.431926       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0917 00:13:38.431992       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0917 00:13:38.432765       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0917 00:13:38.432816       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0917 00:13:38.432831       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0917 00:13:38.432867       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0917 00:13:38.432870       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0917 00:13:38.433430       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0917 00:13:38.433549       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0917 00:13:38.433648       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-472903"
	I0917 00:13:38.433689       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-472903-m02"
	I0917 00:13:38.433719       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-472903-m03"
	I0917 00:13:38.433784       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0917 00:13:38.434607       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0917 00:13:38.436471       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0917 00:13:38.443120       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0917 00:13:38.447017       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	
	
	==> kube-proxy [92dd4d116eb0387dded82fb32d35690ec2d00e3f5e7ac81bf7aea0c6814edd5e] <==
	I0916 23:56:56.831012       1 server_linux.go:53] "Using iptables proxy"
	I0916 23:56:56.891635       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0916 23:56:56.991820       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0916 23:56:56.991862       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0916 23:56:56.991952       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 23:56:57.015955       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 23:56:57.016001       1 server_linux.go:132] "Using iptables Proxier"
	I0916 23:56:57.021120       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 23:56:57.021457       1 server.go:527] "Version info" version="v1.34.0"
	I0916 23:56:57.021499       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 23:56:57.024872       1 config.go:200] "Starting service config controller"
	I0916 23:56:57.024892       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0916 23:56:57.024900       1 config.go:106] "Starting endpoint slice config controller"
	I0916 23:56:57.024909       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0916 23:56:57.024890       1 config.go:403] "Starting serviceCIDR config controller"
	I0916 23:56:57.024917       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0916 23:56:57.024937       1 config.go:309] "Starting node config controller"
	I0916 23:56:57.024942       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0916 23:56:57.125608       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0916 23:56:57.125691       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0916 23:56:57.125856       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0916 23:56:57.125902       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-proxy [b1c8344888d7deab1a3203bf9e16eefcb945905ec04b591acfb2fed3104948ec] <==
	I0917 00:13:36.733439       1 server_linux.go:53] "Using iptables proxy"
	I0917 00:13:36.818219       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0917 00:13:36.918912       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0917 00:13:36.918966       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0917 00:13:36.919071       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 00:13:36.942838       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0917 00:13:36.942910       1 server_linux.go:132] "Using iptables Proxier"
	I0917 00:13:36.949958       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 00:13:36.950427       1 server.go:527] "Version info" version="v1.34.0"
	I0917 00:13:36.950467       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 00:13:36.954376       1 config.go:200] "Starting service config controller"
	I0917 00:13:36.954506       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0917 00:13:36.954587       1 config.go:309] "Starting node config controller"
	I0917 00:13:36.954660       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0917 00:13:36.954669       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0917 00:13:36.954703       1 config.go:106] "Starting endpoint slice config controller"
	I0917 00:13:36.954712       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0917 00:13:36.954729       1 config.go:403] "Starting serviceCIDR config controller"
	I0917 00:13:36.954736       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0917 00:13:37.054981       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0917 00:13:37.055026       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0917 00:13:37.055057       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [9685cc588651ced2d51ab783a94533fff6a60971435eaa8e11982eb715ef5350] <==
	I0917 00:13:30.068882       1 serving.go:386] Generated self-signed cert in-memory
	I0917 00:13:35.071453       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0917 00:13:35.071492       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 00:13:35.090261       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:13:35.090310       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:13:35.090614       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0917 00:13:35.090722       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0917 00:13:35.090743       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0917 00:13:35.090760       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0917 00:13:35.094479       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0917 00:13:35.094536       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I0917 00:13:35.190629       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:13:35.191303       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0917 00:13:35.194926       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kube-scheduler [bba28cace6502de93aa43db4fb51671581c5074990dea721d98d36d839734a67] <==
	E0916 23:56:48.619869       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0916 23:56:48.649766       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0916 23:56:48.673092       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	I0916 23:56:49.170967       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0916 23:57:51.780040       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-x6twd\": pod kindnet-x6twd is already assigned to node \"ha-472903-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-x6twd" node="ha-472903-m03"
	E0916 23:57:51.780142       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod f2346479-1adb-4bc7-af07-971525be2b05(kube-system/kindnet-x6twd) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-x6twd"
	E0916 23:57:51.780183       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-x6twd\": pod kindnet-x6twd is already assigned to node \"ha-472903-m03\"" logger="UnhandledError" pod="kube-system/kindnet-x6twd"
	I0916 23:57:51.782132       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-x6twd" node="ha-472903-m03"
	E0916 23:58:37.948695       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-x6xc9\": pod busybox-7b57f96db7-x6xc9 is already assigned to node \"ha-472903-m02\"" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-x6xc9" node="ha-472903-m02"
	E0916 23:58:37.948846       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 565a634f-ab41-4776-ba5d-63a601bfec48(default/busybox-7b57f96db7-x6xc9) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="default/busybox-7b57f96db7-x6xc9"
	E0916 23:58:37.948875       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-x6xc9\": pod busybox-7b57f96db7-x6xc9 is already assigned to node \"ha-472903-m02\"" logger="UnhandledError" pod="default/busybox-7b57f96db7-x6xc9"
	I0916 23:58:37.950251       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7b57f96db7-x6xc9" node="ha-472903-m02"
	I0916 23:58:37.966099       1 cache.go:512] "Pod was added to a different node than it was assumed" podKey="47b06c15-c007-4c50-a248-5411a0f4b6a7" pod="default/busybox-7b57f96db7-4jfjt" assumedNode="ha-472903-m02" currentNode="ha-472903"
	E0916 23:58:37.968241       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-4jfjt\": pod busybox-7b57f96db7-4jfjt is already assigned to node \"ha-472903-m02\"" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-4jfjt" node="ha-472903"
	E0916 23:58:37.968351       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 47b06c15-c007-4c50-a248-5411a0f4b6a7(default/busybox-7b57f96db7-4jfjt) was assumed on ha-472903 but assigned to ha-472903-m02" logger="UnhandledError" pod="default/busybox-7b57f96db7-4jfjt"
	E0916 23:58:37.968376       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-4jfjt\": pod busybox-7b57f96db7-4jfjt is already assigned to node \"ha-472903-m02\"" logger="UnhandledError" pod="default/busybox-7b57f96db7-4jfjt"
	I0916 23:58:37.969472       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7b57f96db7-4jfjt" node="ha-472903-m02"
	E0916 23:58:38.002469       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-wp95z\": pod busybox-7b57f96db7-wp95z is being deleted, cannot be assigned to a host" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-wp95z" node="ha-472903"
	E0916 23:58:38.002779       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-wp95z\": pod busybox-7b57f96db7-wp95z is being deleted, cannot be assigned to a host" logger="UnhandledError" pod="default/busybox-7b57f96db7-wp95z"
	E0916 23:58:38.046394       1 pod_status_patch.go:111] "Failed to patch pod status" err="pods \"busybox-7b57f96db7-xnrsc\" not found" pod="default/busybox-7b57f96db7-xnrsc"
	E0916 23:58:38.046880       1 pod_status_patch.go:111] "Failed to patch pod status" err="pods \"busybox-7b57f96db7-wp95z\" not found" pod="default/busybox-7b57f96db7-wp95z"
	E0916 23:58:40.050124       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-6hrm6\": pod busybox-7b57f96db7-6hrm6 is already assigned to node \"ha-472903\"" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-6hrm6" node="ha-472903"
	E0916 23:58:40.050213       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod bd03bad4-af1e-42d0-81fb-6fcaeaa8775e(default/busybox-7b57f96db7-6hrm6) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="default/busybox-7b57f96db7-6hrm6"
	E0916 23:58:40.050248       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-6hrm6\": pod busybox-7b57f96db7-6hrm6 is already assigned to node \"ha-472903\"" logger="UnhandledError" pod="default/busybox-7b57f96db7-6hrm6"
	I0916 23:58:40.051853       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7b57f96db7-6hrm6" node="ha-472903"
	
	
	==> kubelet <==
	Sep 17 00:13:35 ha-472903 kubelet[620]: I0917 00:13:35.179855     620 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-ha-472903"
	Sep 17 00:13:35 ha-472903 kubelet[620]: E0917 00:13:35.187290     620 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-ha-472903\" already exists" pod="kube-system/etcd-ha-472903"
	Sep 17 00:13:35 ha-472903 kubelet[620]: I0917 00:13:35.187325     620 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ha-472903"
	Sep 17 00:13:35 ha-472903 kubelet[620]: E0917 00:13:35.196172     620 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ha-472903\" already exists" pod="kube-system/kube-apiserver-ha-472903"
	Sep 17 00:13:36 ha-472903 kubelet[620]: I0917 00:13:36.029595     620 apiserver.go:52] "Watching apiserver"
	Sep 17 00:13:36 ha-472903 kubelet[620]: I0917 00:13:36.036032     620 kubelet.go:3202] "Trying to delete pod" pod="kube-system/kube-vip-ha-472903" podUID="ccdab212-cf0c-4bf0-958b-173e1008f7bc"
	Sep 17 00:13:36 ha-472903 kubelet[620]: I0917 00:13:36.052303     620 kubelet.go:3208] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/kube-vip-ha-472903"
	Sep 17 00:13:36 ha-472903 kubelet[620]: I0917 00:13:36.052325     620 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-vip-ha-472903"
	Sep 17 00:13:36 ha-472903 kubelet[620]: I0917 00:13:36.131204     620 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 17 00:13:36 ha-472903 kubelet[620]: I0917 00:13:36.137227     620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-472903" podStartSLOduration=0.137196984 podStartE2EDuration="137.196984ms" podCreationTimestamp="2025-09-17 00:13:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-17 00:13:36.118811818 +0000 UTC m=+7.151916686" watchObservedRunningTime="2025-09-17 00:13:36.137196984 +0000 UTC m=+7.170301850"
	Sep 17 00:13:36 ha-472903 kubelet[620]: I0917 00:13:36.155169     620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d4a70eec-48a7-4ea6-871a-1b5ed2beca9a-xtables-lock\") pod \"kube-proxy-d4m8f\" (UID: \"d4a70eec-48a7-4ea6-871a-1b5ed2beca9a\") " pod="kube-system/kube-proxy-d4m8f"
	Sep 17 00:13:36 ha-472903 kubelet[620]: I0917 00:13:36.156175     620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/1da43ca7-9af7-4573-9cdc-fd21b098ca2c-cni-cfg\") pod \"kindnet-lh7dv\" (UID: \"1da43ca7-9af7-4573-9cdc-fd21b098ca2c\") " pod="kube-system/kindnet-lh7dv"
	Sep 17 00:13:36 ha-472903 kubelet[620]: I0917 00:13:36.156592     620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1da43ca7-9af7-4573-9cdc-fd21b098ca2c-lib-modules\") pod \"kindnet-lh7dv\" (UID: \"1da43ca7-9af7-4573-9cdc-fd21b098ca2c\") " pod="kube-system/kindnet-lh7dv"
	Sep 17 00:13:36 ha-472903 kubelet[620]: I0917 00:13:36.156960     620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d4a70eec-48a7-4ea6-871a-1b5ed2beca9a-lib-modules\") pod \"kube-proxy-d4m8f\" (UID: \"d4a70eec-48a7-4ea6-871a-1b5ed2beca9a\") " pod="kube-system/kube-proxy-d4m8f"
	Sep 17 00:13:36 ha-472903 kubelet[620]: I0917 00:13:36.157372     620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1da43ca7-9af7-4573-9cdc-fd21b098ca2c-xtables-lock\") pod \"kindnet-lh7dv\" (UID: \"1da43ca7-9af7-4573-9cdc-fd21b098ca2c\") " pod="kube-system/kindnet-lh7dv"
	Sep 17 00:13:36 ha-472903 kubelet[620]: I0917 00:13:36.157474     620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ac7f283e-4d28-46cf-a519-bd227237d5e7-tmp\") pod \"storage-provisioner\" (UID: \"ac7f283e-4d28-46cf-a519-bd227237d5e7\") " pod="kube-system/storage-provisioner"
	Sep 17 00:13:37 ha-472903 kubelet[620]: I0917 00:13:37.056986     620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="021b917bf994c60a5ce7bb1b5d713b5b" path="/var/lib/kubelet/pods/021b917bf994c60a5ce7bb1b5d713b5b/volumes"
	Sep 17 00:13:38 ha-472903 kubelet[620]: I0917 00:13:38.149724     620 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 17 00:13:44 ha-472903 kubelet[620]: I0917 00:13:44.396062     620 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 17 00:13:44 ha-472903 kubelet[620]: I0917 00:13:44.750098     620 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 17 00:14:07 ha-472903 kubelet[620]: I0917 00:14:07.229109     620 scope.go:117] "RemoveContainer" containerID="5a5a17cca6c0a643b6c0881dab5508dcb7de8e6ad77d7e6ecb81d434ab2cc8a1"
	Sep 17 00:14:07 ha-472903 kubelet[620]: I0917 00:14:07.229537     620 scope.go:117] "RemoveContainer" containerID="360a9ae449a3affbb5373c19b5e7e14e1da3ec8397f5e21f1d3c31e298455266"
	Sep 17 00:14:07 ha-472903 kubelet[620]: E0917 00:14:07.229764     620 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(ac7f283e-4d28-46cf-a519-bd227237d5e7)\"" pod="kube-system/storage-provisioner" podUID="ac7f283e-4d28-46cf-a519-bd227237d5e7"
	Sep 17 00:14:20 ha-472903 kubelet[620]: I0917 00:14:20.052702     620 scope.go:117] "RemoveContainer" containerID="360a9ae449a3affbb5373c19b5e7e14e1da3ec8397f5e21f1d3c31e298455266"
	Sep 17 00:14:29 ha-472903 kubelet[620]: I0917 00:14:29.046747     620 scope.go:117] "RemoveContainer" containerID="8683544e2a9d579448e28b8f33653e2c8d1315b2d07bd7b4ce574428d93c6f3a"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-472903 -n ha-472903
helpers_test.go:269: (dbg) Run:  kubectl --context ha-472903 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-7b57f96db7-wkqz5
helpers_test.go:282: ======> post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context ha-472903 describe pod busybox-7b57f96db7-wkqz5
helpers_test.go:290: (dbg) kubectl --context ha-472903 describe pod busybox-7b57f96db7-wkqz5:

                                                
                                                
-- stdout --
	Name:             busybox-7b57f96db7-wkqz5
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7b57f96db7
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7b57f96db7
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bvn6l (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-bvn6l:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age   From               Message
	  ----     ------            ----  ----               -------
	  Warning  FailedScheduling  8s    default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  8s    default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  8s    default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  8s    default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  8s    default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  8s    default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (8.85s)
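Note on the failure above: the FailedScheduling events report "0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules", i.e. after m03 is deleted the two remaining schedulable nodes are excluded by the busybox deployment's pod anti-affinity (typically because each already runs a replica), leaving busybox-7b57f96db7-wkqz5 Pending. A quick way to confirm this from the same kubectl context, assuming the ha-472903 cluster is still reachable; these commands are illustrative and were not part of the recorded test run:

  kubectl --context ha-472903 get nodes -o wide
  kubectl --context ha-472903 get deployment busybox -o jsonpath='{.spec.template.spec.affinity.podAntiAffinity}'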

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (2.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:415: expected profile "ha-472903" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-472903\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-472903\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares
\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.0\",\"ClusterName\":\"ha-472903\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.49.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"containerd\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.0\",\"ContainerRuntime\":\"containerd\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m
02\",\"IP\":\"192.168.49.3\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.0\",\"ContainerRuntime\":\"containerd\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.49.5\",\"Port\":0,\"KubernetesVersion\":\"v1.34.0\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubetail\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\
":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"
SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
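The assertion at ha_test.go:415 parses the JSON emitted by "out/minikube-linux-amd64 profile list --output json" and expects the ha-472903 profile to report "Degraded" after the secondary control-plane node is deleted, but the profile still reports "Starting". To pull the same field out of that output by hand (a sketch assuming jq is installed; not something the test itself runs):

  out/minikube-linux-amd64 profile list --output json | jq -r '.valid[] | select(.Name == "ha-472903") | .Status'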
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-472903
helpers_test.go:243: (dbg) docker inspect ha-472903:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "05f03528ecc5ba6a39041bcc2845d236679d61fa3752c15e7e068dac7d8c9047",
	        "Created": "2025-09-16T23:56:35.178831158Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 838588,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-17T00:13:23.170247962Z",
	            "FinishedAt": "2025-09-17T00:13:22.548619261Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/05f03528ecc5ba6a39041bcc2845d236679d61fa3752c15e7e068dac7d8c9047/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/05f03528ecc5ba6a39041bcc2845d236679d61fa3752c15e7e068dac7d8c9047/hostname",
	        "HostsPath": "/var/lib/docker/containers/05f03528ecc5ba6a39041bcc2845d236679d61fa3752c15e7e068dac7d8c9047/hosts",
	        "LogPath": "/var/lib/docker/containers/05f03528ecc5ba6a39041bcc2845d236679d61fa3752c15e7e068dac7d8c9047/05f03528ecc5ba6a39041bcc2845d236679d61fa3752c15e7e068dac7d8c9047-json.log",
	        "Name": "/ha-472903",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-472903:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-472903",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "05f03528ecc5ba6a39041bcc2845d236679d61fa3752c15e7e068dac7d8c9047",
	                "LowerDir": "/var/lib/docker/overlay2/37229b42d46c992f89d690b880f5a9c43e154eecc2ad5aeee133e9eb30accccb-init/diff:/var/lib/docker/overlay2/949a3fbecd0c2c005aa419b7ddc214e7bf4333225d7b227e8b0d0ea188b945ec/diff",
	                "MergedDir": "/var/lib/docker/overlay2/37229b42d46c992f89d690b880f5a9c43e154eecc2ad5aeee133e9eb30accccb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/37229b42d46c992f89d690b880f5a9c43e154eecc2ad5aeee133e9eb30accccb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/37229b42d46c992f89d690b880f5a9c43e154eecc2ad5aeee133e9eb30accccb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-472903",
	                "Source": "/var/lib/docker/volumes/ha-472903/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-472903",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-472903",
	                "name.minikube.sigs.k8s.io": "ha-472903",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f681bc3451c2f9b5cdb2156ffcba04f0e713f66cdf73bde32e7115dbf471fa7b",
	            "SandboxKey": "/var/run/docker/netns/f681bc3451c2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33574"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33575"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33578"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33576"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33577"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-472903": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:43:7c:dc:22:69",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "22d49b2f397dfabc2a3967bd54b05204a52976e683f65ff07bff00e793040bef",
	                    "EndpointID": "4140add73c3678ffb48555035c60424ac6e443ed664566963b98cd7acf01832d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-472903",
	                        "05f03528ecc5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-472903 -n ha-472903
helpers_test.go:252: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p ha-472903 logs -n 25: (1.398381825s)
helpers_test.go:260: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-472903 ssh -n ha-472903-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │ 17 Sep 25 00:11 UTC │
	│ ssh     │ ha-472903 ssh -n ha-472903-m02 sudo cat /home/docker/cp-test_ha-472903-m03_ha-472903-m02.txt                                         │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │ 17 Sep 25 00:11 UTC │
	│ cp      │ ha-472903 cp ha-472903-m03:/home/docker/cp-test.txt ha-472903-m04:/home/docker/cp-test_ha-472903-m03_ha-472903-m04.txt               │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ ssh     │ ha-472903 ssh -n ha-472903-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │ 17 Sep 25 00:11 UTC │
	│ ssh     │ ha-472903 ssh -n ha-472903-m04 sudo cat /home/docker/cp-test_ha-472903-m03_ha-472903-m04.txt                                         │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ cp      │ ha-472903 cp testdata/cp-test.txt ha-472903-m04:/home/docker/cp-test.txt                                                             │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ ssh     │ ha-472903 ssh -n ha-472903-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ cp      │ ha-472903 cp ha-472903-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2970888916/001/cp-test_ha-472903-m04.txt │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ ssh     │ ha-472903 ssh -n ha-472903-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ cp      │ ha-472903 cp ha-472903-m04:/home/docker/cp-test.txt ha-472903:/home/docker/cp-test_ha-472903-m04_ha-472903.txt                       │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ ssh     │ ha-472903 ssh -n ha-472903-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ ssh     │ ha-472903 ssh -n ha-472903 sudo cat /home/docker/cp-test_ha-472903-m04_ha-472903.txt                                                 │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ cp      │ ha-472903 cp ha-472903-m04:/home/docker/cp-test.txt ha-472903-m02:/home/docker/cp-test_ha-472903-m04_ha-472903-m02.txt               │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ ssh     │ ha-472903 ssh -n ha-472903-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ ssh     │ ha-472903 ssh -n ha-472903-m02 sudo cat /home/docker/cp-test_ha-472903-m04_ha-472903-m02.txt                                         │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ cp      │ ha-472903 cp ha-472903-m04:/home/docker/cp-test.txt ha-472903-m03:/home/docker/cp-test_ha-472903-m04_ha-472903-m03.txt               │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ ssh     │ ha-472903 ssh -n ha-472903-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ ssh     │ ha-472903 ssh -n ha-472903-m03 sudo cat /home/docker/cp-test_ha-472903-m04_ha-472903-m03.txt                                         │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ node    │ ha-472903 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │ 17 Sep 25 00:11 UTC │
	│ node    │ ha-472903 node start m02 --alsologtostderr -v 5                                                                                      │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │ 17 Sep 25 00:11 UTC │
	│ node    │ ha-472903 node list --alsologtostderr -v 5                                                                                           │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │                     │
	│ stop    │ ha-472903 stop --alsologtostderr -v 5                                                                                                │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │ 17 Sep 25 00:13 UTC │
	│ start   │ ha-472903 start --wait true --alsologtostderr -v 5                                                                                   │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:13 UTC │                     │
	│ node    │ ha-472903 node list --alsologtostderr -v 5                                                                                           │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:19 UTC │                     │
	│ node    │ ha-472903 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:19 UTC │ 17 Sep 25 00:20 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/17 00:13:22
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 00:13:22.953197  838391 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:13:22.953530  838391 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:13:22.953542  838391 out.go:374] Setting ErrFile to fd 2...
	I0917 00:13:22.953549  838391 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:13:22.953766  838391 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-749120/.minikube/bin
	I0917 00:13:22.954306  838391 out.go:368] Setting JSON to false
	I0917 00:13:22.955398  838391 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":10545,"bootTime":1758057458,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 00:13:22.955520  838391 start.go:140] virtualization: kvm guest
	I0917 00:13:22.957510  838391 out.go:179] * [ha-472903] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0917 00:13:22.958615  838391 out.go:179]   - MINIKUBE_LOCATION=21550
	I0917 00:13:22.958642  838391 notify.go:220] Checking for updates...
	I0917 00:13:22.960507  838391 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 00:13:22.961674  838391 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-749120/kubeconfig
	I0917 00:13:22.962866  838391 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-749120/.minikube
	I0917 00:13:22.964443  838391 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 00:13:22.965391  838391 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 00:13:22.966891  838391 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:13:22.966986  838391 driver.go:421] Setting default libvirt URI to qemu:///system
	I0917 00:13:22.992446  838391 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0917 00:13:22.992525  838391 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:13:23.045449  838391 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-17 00:13:23.034509691 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:13:23.045556  838391 docker.go:318] overlay module found
	I0917 00:13:23.047016  838391 out.go:179] * Using the docker driver based on existing profile
	I0917 00:13:23.047922  838391 start.go:304] selected driver: docker
	I0917 00:13:23.047937  838391 start.go:918] validating driver "docker" against &{Name:ha-472903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNam
es:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress
:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:13:23.048084  838391 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 00:13:23.048209  838391 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:13:23.101147  838391 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-17 00:13:23.091009521 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:13:23.102012  838391 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:13:23.102057  838391 cni.go:84] Creating CNI manager for ""
	I0917 00:13:23.102129  838391 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0917 00:13:23.102195  838391 start.go:348] cluster config:
	{Name:ha-472903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISoc
ket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-prov
isioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Custom
QemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:13:23.103903  838391 out.go:179] * Starting "ha-472903" primary control-plane node in "ha-472903" cluster
	I0917 00:13:23.104759  838391 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0917 00:13:23.105814  838391 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:13:23.106795  838391 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0917 00:13:23.106833  838391 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4
	I0917 00:13:23.106844  838391 cache.go:58] Caching tarball of preloaded images
	I0917 00:13:23.106881  838391 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:13:23.106921  838391 preload.go:172] Found /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 00:13:23.106932  838391 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0917 00:13:23.107045  838391 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0917 00:13:23.127051  838391 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:13:23.127078  838391 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:13:23.127093  838391 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:13:23.127117  838391 start.go:360] acquireMachinesLock for ha-472903: {Name:mk994658ce3314f2aed1dec341debc49d36a4326 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:13:23.127173  838391 start.go:364] duration metric: took 38.444µs to acquireMachinesLock for "ha-472903"
	I0917 00:13:23.127192  838391 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:13:23.127199  838391 fix.go:54] fixHost starting: 
	I0917 00:13:23.127403  838391 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0917 00:13:23.144605  838391 fix.go:112] recreateIfNeeded on ha-472903: state=Stopped err=<nil>
	W0917 00:13:23.144651  838391 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:13:23.146403  838391 out.go:252] * Restarting existing docker container for "ha-472903" ...
	I0917 00:13:23.146471  838391 cli_runner.go:164] Run: docker start ha-472903
	I0917 00:13:23.362855  838391 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0917 00:13:23.380820  838391 kic.go:430] container "ha-472903" state is running.
	I0917 00:13:23.381209  838391 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903
	I0917 00:13:23.398851  838391 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0917 00:13:23.399057  838391 machine.go:93] provisionDockerMachine start ...
	I0917 00:13:23.399113  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0917 00:13:23.416213  838391 main.go:141] libmachine: Using SSH client type: native
	I0917 00:13:23.416490  838391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33574 <nil> <nil>}
	I0917 00:13:23.416505  838391 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:13:23.417056  838391 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37384->127.0.0.1:33574: read: connection reset by peer
	I0917 00:13:26.554176  838391 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-472903
	
	I0917 00:13:26.554202  838391 ubuntu.go:182] provisioning hostname "ha-472903"
	I0917 00:13:26.554275  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0917 00:13:26.572576  838391 main.go:141] libmachine: Using SSH client type: native
	I0917 00:13:26.572800  838391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33574 <nil> <nil>}
	I0917 00:13:26.572813  838391 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-472903 && echo "ha-472903" | sudo tee /etc/hostname
	I0917 00:13:26.719562  838391 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-472903
	
	I0917 00:13:26.719659  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0917 00:13:26.737757  838391 main.go:141] libmachine: Using SSH client type: native
	I0917 00:13:26.738008  838391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33574 <nil> <nil>}
	I0917 00:13:26.738032  838391 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-472903' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-472903/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-472903' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 00:13:26.872954  838391 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:13:26.872993  838391 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-749120/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-749120/.minikube}
	I0917 00:13:26.873020  838391 ubuntu.go:190] setting up certificates
	I0917 00:13:26.873033  838391 provision.go:84] configureAuth start
	I0917 00:13:26.873086  838391 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903
	I0917 00:13:26.891066  838391 provision.go:143] copyHostCerts
	I0917 00:13:26.891111  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem
	I0917 00:13:26.891147  838391 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem, removing ...
	I0917 00:13:26.891169  838391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem
	I0917 00:13:26.891262  838391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem (1078 bytes)
	I0917 00:13:26.891384  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem
	I0917 00:13:26.891432  838391 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem, removing ...
	I0917 00:13:26.891443  838391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem
	I0917 00:13:26.891485  838391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem (1123 bytes)
	I0917 00:13:26.891575  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem
	I0917 00:13:26.891600  838391 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem, removing ...
	I0917 00:13:26.891610  838391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem
	I0917 00:13:26.891648  838391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem (1675 bytes)
	I0917 00:13:26.891725  838391 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem org=jenkins.ha-472903 san=[127.0.0.1 192.168.49.2 ha-472903 localhost minikube]
	I0917 00:13:27.127844  838391 provision.go:177] copyRemoteCerts
	I0917 00:13:27.127908  838391 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:13:27.127972  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0917 00:13:27.146507  838391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33574 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0917 00:13:27.243455  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 00:13:27.243525  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 00:13:27.269313  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 00:13:27.269382  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0917 00:13:27.294966  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 00:13:27.295048  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 00:13:27.320815  838391 provision.go:87] duration metric: took 447.761849ms to configureAuth
	I0917 00:13:27.320860  838391 ubuntu.go:206] setting minikube options for container-runtime
	I0917 00:13:27.321072  838391 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:13:27.321085  838391 machine.go:96] duration metric: took 3.922015218s to provisionDockerMachine
	I0917 00:13:27.321092  838391 start.go:293] postStartSetup for "ha-472903" (driver="docker")
	I0917 00:13:27.321102  838391 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 00:13:27.321150  838391 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 00:13:27.321188  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0917 00:13:27.339742  838391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33574 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0917 00:13:27.437715  838391 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 00:13:27.441468  838391 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 00:13:27.441498  838391 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 00:13:27.441506  838391 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 00:13:27.441513  838391 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 00:13:27.441524  838391 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-749120/.minikube/addons for local assets ...
	I0917 00:13:27.441576  838391 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-749120/.minikube/files for local assets ...
	I0917 00:13:27.441647  838391 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> 7527072.pem in /etc/ssl/certs
	I0917 00:13:27.441657  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> /etc/ssl/certs/7527072.pem
	I0917 00:13:27.441747  838391 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 00:13:27.451010  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem --> /etc/ssl/certs/7527072.pem (1708 bytes)
	I0917 00:13:27.477190  838391 start.go:296] duration metric: took 156.078591ms for postStartSetup
	I0917 00:13:27.477273  838391 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:13:27.477311  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0917 00:13:27.495838  838391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33574 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0917 00:13:27.588631  838391 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:13:27.593367  838391 fix.go:56] duration metric: took 4.46615876s for fixHost
	I0917 00:13:27.593398  838391 start.go:83] releasing machines lock for "ha-472903", held for 4.466212718s
	I0917 00:13:27.593488  838391 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903
	I0917 00:13:27.611894  838391 ssh_runner.go:195] Run: cat /version.json
	I0917 00:13:27.611963  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0917 00:13:27.611984  838391 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 00:13:27.612068  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0917 00:13:27.630790  838391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33574 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0917 00:13:27.632015  838391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33574 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0917 00:13:27.723564  838391 ssh_runner.go:195] Run: systemctl --version
	I0917 00:13:27.805571  838391 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 00:13:27.810704  838391 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0917 00:13:27.829982  838391 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0917 00:13:27.830056  838391 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:13:27.839307  838391 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0917 00:13:27.839334  838391 start.go:495] detecting cgroup driver to use...
	I0917 00:13:27.839374  838391 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:13:27.839455  838391 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 00:13:27.853620  838391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 00:13:27.866086  838391 docker.go:218] disabling cri-docker service (if available) ...
	I0917 00:13:27.866143  838391 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 00:13:27.879568  838391 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 00:13:27.891699  838391 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 00:13:27.957039  838391 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 00:13:28.019649  838391 docker.go:234] disabling docker service ...
	I0917 00:13:28.019719  838391 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 00:13:28.032725  838391 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 00:13:28.045044  838391 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 00:13:28.110090  838391 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 00:13:28.176290  838391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 00:13:28.188485  838391 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:13:28.206191  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0917 00:13:28.216912  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 00:13:28.227586  838391 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0917 00:13:28.227653  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0917 00:13:28.238198  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:13:28.248607  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 00:13:28.258883  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:13:28.269300  838391 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 00:13:28.279692  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 00:13:28.290638  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 00:13:28.301524  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 00:13:28.312695  838391 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 00:13:28.321821  838391 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 00:13:28.331494  838391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:13:28.395408  838391 ssh_runner.go:195] Run: sudo systemctl restart containerd
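For orientation, the sed edits above converge on a handful of CRI settings in /etc/containerd/config.toml; a minimal spot-check sketch, assuming containerd 1.7's v2 config layout on the node (illustrative only, not part of the test run):

  grep -n 'SystemdCgroup'  /etc/containerd/config.toml   # expect: SystemdCgroup = true
  grep -n 'sandbox_image'  /etc/containerd/config.toml   # expect: "registry.k8s.io/pause:3.10.1"
  grep -n 'conf_dir'       /etc/containerd/config.toml   # expect: "/etc/cni/net.d"
  sudo crictl info | grep -i 'SystemdCgroup'              # CRI view once containerd is back up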
	I0917 00:13:28.510345  838391 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0917 00:13:28.510442  838391 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0917 00:13:28.514486  838391 start.go:563] Will wait 60s for crictl version
	I0917 00:13:28.514543  838391 ssh_runner.go:195] Run: which crictl
	I0917 00:13:28.518058  838391 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:13:28.553392  838391 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0917 00:13:28.553470  838391 ssh_runner.go:195] Run: containerd --version
	I0917 00:13:28.578186  838391 ssh_runner.go:195] Run: containerd --version
	I0917 00:13:28.607037  838391 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0917 00:13:28.608343  838391 cli_runner.go:164] Run: docker network inspect ha-472903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:13:28.625981  838391 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0917 00:13:28.630074  838391 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:13:28.642270  838391 kubeadm.go:875] updating cluster {Name:ha-472903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIP
s:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false
ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiza
tions:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 00:13:28.642447  838391 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0917 00:13:28.642500  838391 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 00:13:28.677502  838391 containerd.go:627] all images are preloaded for containerd runtime.
	I0917 00:13:28.677528  838391 containerd.go:534] Images already preloaded, skipping extraction
	I0917 00:13:28.677596  838391 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 00:13:28.711767  838391 containerd.go:627] all images are preloaded for containerd runtime.
	I0917 00:13:28.711790  838391 cache_images.go:85] Images are preloaded, skipping loading
	I0917 00:13:28.711799  838391 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 containerd true true} ...
	I0917 00:13:28.711898  838391 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-472903 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
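The kubelet unit drop-in shown above lands at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below (the 313-byte scp); an illustrative way to confirm what the kubelet actually runs with, not something the test executes:

  systemctl cat kubelet | grep -A1 '^ExecStart='   # merged unit plus the drop-in's ExecStart
  systemctl show kubelet -p DropInPaths            # should list .../kubelet.service.d/10-kubeadm.conf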
	I0917 00:13:28.711952  838391 ssh_runner.go:195] Run: sudo crictl info
	I0917 00:13:28.748238  838391 cni.go:84] Creating CNI manager for ""
	I0917 00:13:28.748269  838391 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0917 00:13:28.748282  838391 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 00:13:28.748301  838391 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-472903 NodeName:ha-472903 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 00:13:28.748434  838391 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "ha-472903"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
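Once this generated config is copied to /var/tmp/minikube/kubeadm.yaml.new (see the scp step further down), it can be sanity-checked offline; a minimal sketch, assuming the kubeadm binary staged under /var/lib/minikube/binaries/v1.34.0:

  # Illustrative only: validate the generated kubeadm config against the v1beta4 schema.
  sudo /var/lib/minikube/binaries/v1.34.0/kubeadm config validate \
    --config /var/tmp/minikube/kubeadm.yaml.new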
	
	I0917 00:13:28.748456  838391 kube-vip.go:115] generating kube-vip config ...
	I0917 00:13:28.748504  838391 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0917 00:13:28.761835  838391 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:13:28.761950  838391 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
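The manifest above runs kube-vip in ARP mode and pins the HA virtual IP 192.168.49.254 to eth0 on port 8443; a rough, illustrative way to confirm the VIP once the static pod is up (not part of the test run):

  ip addr show dev eth0 | grep '192.168.49.254'    # VIP claimed on this node
  curl -sk https://192.168.49.254:8443/healthz     # API server reachable through the VIP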
	I0917 00:13:28.762005  838391 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 00:13:28.771377  838391 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 00:13:28.771466  838391 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0917 00:13:28.780815  838391 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0917 00:13:28.799673  838391 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 00:13:28.818695  838391 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I0917 00:13:28.837443  838391 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0917 00:13:28.856629  838391 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0917 00:13:28.860342  838391 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
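Between this entry and the earlier host.minikube.internal update, the node's /etc/hosts should now carry both minikube-internal names; a quick illustrative check:

  grep -E 'host\.minikube\.internal|control-plane\.minikube\.internal' /etc/hosts
  # expected, per the two commands above:
  #   192.168.49.1    host.minikube.internal
  #   192.168.49.254  control-plane.minikube.internal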
	I0917 00:13:28.871978  838391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:13:28.937920  838391 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:13:28.965162  838391 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903 for IP: 192.168.49.2
	I0917 00:13:28.965183  838391 certs.go:194] generating shared ca certs ...
	I0917 00:13:28.965200  838391 certs.go:226] acquiring lock for ca certs: {Name:mk87d179b4a631193bd9c86db8034ccf19400cde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:13:28.965352  838391 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key
	I0917 00:13:28.965429  838391 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key
	I0917 00:13:28.965446  838391 certs.go:256] generating profile certs ...
	I0917 00:13:28.965567  838391 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key
	I0917 00:13:28.965609  838391 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.8acf531c
	I0917 00:13:28.965631  838391 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.8acf531c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I0917 00:13:28.981661  838391 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.8acf531c ...
	I0917 00:13:28.981698  838391 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.8acf531c: {Name:mkdef0e1cbf73e7227a698510b51d68a698391c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:13:28.981868  838391 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.8acf531c ...
	I0917 00:13:28.981880  838391 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.8acf531c: {Name:mk80b61f5fe8d635199050a211c5a719c4b8f9fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:13:28.981959  838391 certs.go:381] copying /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.8acf531c -> /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt
	I0917 00:13:28.982123  838391 certs.go:385] copying /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.8acf531c -> /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key
	I0917 00:13:28.982267  838391 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key
	I0917 00:13:28.982283  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 00:13:28.982296  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 00:13:28.982309  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 00:13:28.982327  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 00:13:28.982340  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 00:13:28.982352  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 00:13:28.982367  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 00:13:28.982379  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 00:13:28.982446  838391 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem (1338 bytes)
	W0917 00:13:28.982481  838391 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707_empty.pem, impossibly tiny 0 bytes
	I0917 00:13:28.982491  838391 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 00:13:28.982517  838391 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem (1078 bytes)
	I0917 00:13:28.982539  838391 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem (1123 bytes)
	I0917 00:13:28.982559  838391 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem (1675 bytes)
	I0917 00:13:28.982598  838391 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem (1708 bytes)
	I0917 00:13:28.982624  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem -> /usr/share/ca-certificates/752707.pem
	I0917 00:13:28.982638  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> /usr/share/ca-certificates/7527072.pem
	I0917 00:13:28.982650  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:13:28.983259  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 00:13:29.011855  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 00:13:29.044116  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 00:13:29.076632  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0917 00:13:29.102081  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0917 00:13:29.127618  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 00:13:29.154054  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 00:13:29.181152  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 00:13:29.207152  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem --> /usr/share/ca-certificates/752707.pem (1338 bytes)
	I0917 00:13:29.234803  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem --> /usr/share/ca-certificates/7527072.pem (1708 bytes)
	I0917 00:13:29.261065  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 00:13:29.285817  838391 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 00:13:29.304802  838391 ssh_runner.go:195] Run: openssl version
	I0917 00:13:29.310548  838391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7527072.pem && ln -fs /usr/share/ca-certificates/7527072.pem /etc/ssl/certs/7527072.pem"
	I0917 00:13:29.321280  838391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7527072.pem
	I0917 00:13:29.325168  838391 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 23:54 /usr/share/ca-certificates/7527072.pem
	I0917 00:13:29.325220  838391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7527072.pem
	I0917 00:13:29.332550  838391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7527072.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 00:13:29.342450  838391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 00:13:29.352677  838391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:13:29.356484  838391 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:13:29.356557  838391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:13:29.363671  838391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 00:13:29.373502  838391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/752707.pem && ln -fs /usr/share/ca-certificates/752707.pem /etc/ssl/certs/752707.pem"
	I0917 00:13:29.383350  838391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/752707.pem
	I0917 00:13:29.386969  838391 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 23:54 /usr/share/ca-certificates/752707.pem
	I0917 00:13:29.387020  838391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/752707.pem
	I0917 00:13:29.393845  838391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/752707.pem /etc/ssl/certs/51391683.0"
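The hash-named links above follow OpenSSL's CApath convention: each link is named after the subject hash of the certificate it points to, which is how TLS clients on the node later locate the CA. A generic sketch of the same pattern (illustrative; the paths and the b5213941 hash are taken from the log above):

  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941
  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
  openssl verify -CApath /etc/ssl/certs /var/lib/minikube/certs/apiserver.crt    # signed by minikubeCA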
	I0917 00:13:29.402996  838391 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 00:13:29.406679  838391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 00:13:29.413276  838391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 00:13:29.420039  838391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 00:13:29.426813  838391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 00:13:29.433710  838391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 00:13:29.440812  838391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0917 00:13:29.447756  838391 kubeadm.go:392] StartCluster: {Name:ha-472903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[
] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ing
ress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizatio
ns:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:13:29.447896  838391 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0917 00:13:29.447983  838391 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 00:13:29.484343  838391 cri.go:89] found id: "5a5a17cca6c0a643b6c0881dab5508dcb7de8e6ad77d7e6ecb81d434ab2cc8a1"
	I0917 00:13:29.484364  838391 cri.go:89] found id: "8683544e2a9d579448e28b8f33653e2c8d1315b2d07bd7b4ce574428d93c6f3a"
	I0917 00:13:29.484368  838391 cri.go:89] found id: "9f103b05d2d6fd9df1ffca0135173363251e58587aa3f9093200d96a7302d315"
	I0917 00:13:29.484373  838391 cri.go:89] found id: "3b457407f10e357ce33da7fa3fb4333f8312f0d3e3570cf8528cdcac8f5a1d0f"
	I0917 00:13:29.484376  838391 cri.go:89] found id: "cc69d2451cb65860b5bc78e027be2fc1cb0f9fa6542b4abe3bc1ff1c90a8fe60"
	I0917 00:13:29.484379  838391 cri.go:89] found id: "92dd4d116eb0387dded82fb32d35690ec2d00e3f5e7ac81bf7aea0c6814edd5e"
	I0917 00:13:29.484382  838391 cri.go:89] found id: "bba28cace6502de93aa43db4fb51671581c5074990dea721d98d36d839734a67"
	I0917 00:13:29.484384  838391 cri.go:89] found id: "087290a41f59caa4f9bc89759bcec6cf90f47c8a2ab83b7c671a8fff35641df9"
	I0917 00:13:29.484387  838391 cri.go:89] found id: "0aba62132d764965d8e1a80a4a6345bb7e34892b23143da4a7af3450cd465d6c"
	I0917 00:13:29.484395  838391 cri.go:89] found id: "23c0af0bdbe9526d53769461ed9f80d8c743b02e625b65cce39c888f5e7d4b4e"
	I0917 00:13:29.484398  838391 cri.go:89] found id: ""
	I0917 00:13:29.484470  838391 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W0917 00:13:29.498073  838391 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T00:13:29Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I0917 00:13:29.498177  838391 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 00:13:29.508791  838391 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0917 00:13:29.508813  838391 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0917 00:13:29.508861  838391 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0917 00:13:29.519962  838391 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:13:29.520528  838391 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-472903" does not appear in /home/jenkins/minikube-integration/21550-749120/kubeconfig
	I0917 00:13:29.520700  838391 kubeconfig.go:62] /home/jenkins/minikube-integration/21550-749120/kubeconfig needs updating (will repair): [kubeconfig missing "ha-472903" cluster setting kubeconfig missing "ha-472903" context setting]
	I0917 00:13:29.521229  838391 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/kubeconfig: {Name:mk937123a8fee18625833b0bd778c4556f6787be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:13:29.521963  838391 kapi.go:59] client config for ha-472903: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key", CAFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0917 00:13:29.522552  838391 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0917 00:13:29.522579  838391 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0917 00:13:29.522586  838391 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0917 00:13:29.522592  838391 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0917 00:13:29.522598  838391 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0917 00:13:29.522631  838391 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0917 00:13:29.523130  838391 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0917 00:13:29.536212  838391 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.49.2
	I0917 00:13:29.536248  838391 kubeadm.go:593] duration metric: took 27.419363ms to restartPrimaryControlPlane
	I0917 00:13:29.536260  838391 kubeadm.go:394] duration metric: took 88.513961ms to StartCluster
	I0917 00:13:29.536281  838391 settings.go:142] acquiring lock: {Name:mk6c1a5bee23e141aad5180323c16c47ed580ab8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:13:29.536352  838391 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21550-749120/kubeconfig
	I0917 00:13:29.537180  838391 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/kubeconfig: {Name:mk937123a8fee18625833b0bd778c4556f6787be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:13:29.537465  838391 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0917 00:13:29.537498  838391 start.go:241] waiting for startup goroutines ...
	I0917 00:13:29.537509  838391 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 00:13:29.537779  838391 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:13:29.539896  838391 out.go:179] * Enabled addons: 
	I0917 00:13:29.541345  838391 addons.go:514] duration metric: took 3.828487ms for enable addons: enabled=[]
	I0917 00:13:29.541404  838391 start.go:246] waiting for cluster config update ...
	I0917 00:13:29.541459  838391 start.go:255] writing updated cluster config ...
	I0917 00:13:29.543184  838391 out.go:203] 
	I0917 00:13:29.548360  838391 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:13:29.548520  838391 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0917 00:13:29.550284  838391 out.go:179] * Starting "ha-472903-m02" control-plane node in "ha-472903" cluster
	I0917 00:13:29.551514  838391 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0917 00:13:29.552445  838391 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:13:29.554184  838391 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0917 00:13:29.554221  838391 cache.go:58] Caching tarball of preloaded images
	I0917 00:13:29.554326  838391 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:13:29.554361  838391 preload.go:172] Found /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 00:13:29.554376  838391 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0917 00:13:29.554541  838391 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0917 00:13:29.581238  838391 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:13:29.581265  838391 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:13:29.581286  838391 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:13:29.581322  838391 start.go:360] acquireMachinesLock for ha-472903-m02: {Name:mk81d8c73856cf84ceff1767a1681f3f3cdab773 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:13:29.581402  838391 start.go:364] duration metric: took 53.081µs to acquireMachinesLock for "ha-472903-m02"
	I0917 00:13:29.581447  838391 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:13:29.581461  838391 fix.go:54] fixHost starting: m02
	I0917 00:13:29.581795  838391 cli_runner.go:164] Run: docker container inspect ha-472903-m02 --format={{.State.Status}}
	I0917 00:13:29.604878  838391 fix.go:112] recreateIfNeeded on ha-472903-m02: state=Stopped err=<nil>
	W0917 00:13:29.604915  838391 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:13:29.607517  838391 out.go:252] * Restarting existing docker container for "ha-472903-m02" ...
	I0917 00:13:29.607600  838391 cli_runner.go:164] Run: docker start ha-472903-m02
	I0917 00:13:29.911119  838391 cli_runner.go:164] Run: docker container inspect ha-472903-m02 --format={{.State.Status}}
	I0917 00:13:29.930731  838391 kic.go:430] container "ha-472903-m02" state is running.
	I0917 00:13:29.931116  838391 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m02
	I0917 00:13:29.951026  838391 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0917 00:13:29.951305  838391 machine.go:93] provisionDockerMachine start ...
	I0917 00:13:29.951370  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0917 00:13:29.974010  838391 main.go:141] libmachine: Using SSH client type: native
	I0917 00:13:29.974330  838391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33579 <nil> <nil>}
	I0917 00:13:29.974348  838391 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:13:29.975092  838391 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37012->127.0.0.1:33579: read: connection reset by peer
	I0917 00:13:33.111351  838391 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-472903-m02
	
	I0917 00:13:33.111379  838391 ubuntu.go:182] provisioning hostname "ha-472903-m02"
	I0917 00:13:33.111466  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0917 00:13:33.129914  838391 main.go:141] libmachine: Using SSH client type: native
	I0917 00:13:33.130125  838391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33579 <nil> <nil>}
	I0917 00:13:33.130138  838391 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-472903-m02 && echo "ha-472903-m02" | sudo tee /etc/hostname
	I0917 00:13:33.276390  838391 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-472903-m02
	
	I0917 00:13:33.276473  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0917 00:13:33.295322  838391 main.go:141] libmachine: Using SSH client type: native
	I0917 00:13:33.295578  838391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33579 <nil> <nil>}
	I0917 00:13:33.295626  838391 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-472903-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-472903-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-472903-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 00:13:33.430221  838391 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:13:33.430255  838391 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-749120/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-749120/.minikube}
	I0917 00:13:33.430276  838391 ubuntu.go:190] setting up certificates
	I0917 00:13:33.430293  838391 provision.go:84] configureAuth start
	I0917 00:13:33.430347  838391 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m02
	I0917 00:13:33.447859  838391 provision.go:143] copyHostCerts
	I0917 00:13:33.447896  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem
	I0917 00:13:33.447924  838391 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem, removing ...
	I0917 00:13:33.447931  838391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem
	I0917 00:13:33.447997  838391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem (1078 bytes)
	I0917 00:13:33.448082  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem
	I0917 00:13:33.448101  838391 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem, removing ...
	I0917 00:13:33.448105  838391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem
	I0917 00:13:33.448129  838391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem (1123 bytes)
	I0917 00:13:33.448171  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem
	I0917 00:13:33.448188  838391 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem, removing ...
	I0917 00:13:33.448194  838391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem
	I0917 00:13:33.448221  838391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem (1675 bytes)
	I0917 00:13:33.448284  838391 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem org=jenkins.ha-472903-m02 san=[127.0.0.1 192.168.49.3 ha-472903-m02 localhost minikube]
	I0917 00:13:33.772202  838391 provision.go:177] copyRemoteCerts
	I0917 00:13:33.772271  838391 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:13:33.772308  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0917 00:13:33.790580  838391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33579 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa Username:docker}
	I0917 00:13:33.888743  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 00:13:33.888811  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 00:13:33.915641  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 00:13:33.915714  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 00:13:33.947505  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 00:13:33.947576  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 00:13:33.982626  838391 provision.go:87] duration metric: took 552.315533ms to configureAuth
	I0917 00:13:33.982666  838391 ubuntu.go:206] setting minikube options for container-runtime
	I0917 00:13:33.983009  838391 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:13:33.983035  838391 machine.go:96] duration metric: took 4.031716501s to provisionDockerMachine
	I0917 00:13:33.983048  838391 start.go:293] postStartSetup for "ha-472903-m02" (driver="docker")
	I0917 00:13:33.983079  838391 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 00:13:33.983149  838391 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 00:13:33.983189  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0917 00:13:34.006390  838391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33579 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa Username:docker}
	I0917 00:13:34.114836  838391 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 00:13:34.122569  838391 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 00:13:34.122609  838391 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 00:13:34.122622  838391 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 00:13:34.122631  838391 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 00:13:34.122648  838391 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-749120/.minikube/addons for local assets ...
	I0917 00:13:34.122715  838391 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-749120/.minikube/files for local assets ...
	I0917 00:13:34.122819  838391 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> 7527072.pem in /etc/ssl/certs
	I0917 00:13:34.122842  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> /etc/ssl/certs/7527072.pem
	I0917 00:13:34.122963  838391 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 00:13:34.133119  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem --> /etc/ssl/certs/7527072.pem (1708 bytes)
	I0917 00:13:34.163792  838391 start.go:296] duration metric: took 180.726136ms for postStartSetup
	I0917 00:13:34.163881  838391 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:13:34.163931  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0917 00:13:34.187017  838391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33579 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa Username:docker}
	I0917 00:13:34.289000  838391 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:13:34.295122  838391 fix.go:56] duration metric: took 4.713651457s for fixHost
	I0917 00:13:34.295149  838391 start.go:83] releasing machines lock for "ha-472903-m02", held for 4.713713361s
	I0917 00:13:34.295238  838391 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m02
	I0917 00:13:34.323055  838391 out.go:179] * Found network options:
	I0917 00:13:34.324886  838391 out.go:179]   - NO_PROXY=192.168.49.2
	W0917 00:13:34.326740  838391 proxy.go:120] fail to check proxy env: Error ip not in block
	W0917 00:13:34.326797  838391 proxy.go:120] fail to check proxy env: Error ip not in block
	I0917 00:13:34.326881  838391 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 00:13:34.326949  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0917 00:13:34.327068  838391 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 00:13:34.327142  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0917 00:13:34.349495  838391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33579 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa Username:docker}
	I0917 00:13:34.351023  838391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33579 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa Username:docker}
	I0917 00:13:34.450454  838391 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0917 00:13:34.547618  838391 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0917 00:13:34.547706  838391 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:13:34.558822  838391 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0917 00:13:34.558854  838391 start.go:495] detecting cgroup driver to use...
	I0917 00:13:34.558889  838391 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:13:34.558939  838391 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 00:13:34.584135  838391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 00:13:34.599048  838391 docker.go:218] disabling cri-docker service (if available) ...
	I0917 00:13:34.599118  838391 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 00:13:34.615043  838391 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 00:13:34.627813  838391 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 00:13:34.751575  838391 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 00:13:34.913336  838391 docker.go:234] disabling docker service ...
	I0917 00:13:34.913429  838391 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 00:13:34.943843  838391 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 00:13:34.964995  838391 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 00:13:35.154858  838391 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 00:13:35.276803  838391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 00:13:35.292337  838391 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:13:35.312501  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0917 00:13:35.325061  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 00:13:35.337094  838391 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0917 00:13:35.337162  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0917 00:13:35.349635  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:13:35.361644  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 00:13:35.373144  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:13:35.385968  838391 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 00:13:35.397684  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 00:13:35.409662  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 00:13:35.422089  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 00:13:35.433950  838391 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 00:13:35.445355  838391 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 00:13:35.456096  838391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:13:35.554404  838391 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0917 00:13:35.775103  838391 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0917 00:13:35.775175  838391 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0917 00:13:35.780034  838391 start.go:563] Will wait 60s for crictl version
	I0917 00:13:35.780106  838391 ssh_runner.go:195] Run: which crictl
	I0917 00:13:35.784109  838391 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:13:35.826151  838391 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0917 00:13:35.826224  838391 ssh_runner.go:195] Run: containerd --version
	I0917 00:13:35.852960  838391 ssh_runner.go:195] Run: containerd --version
	I0917 00:13:35.877876  838391 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0917 00:13:35.879103  838391 out.go:179]   - env NO_PROXY=192.168.49.2
	I0917 00:13:35.880100  838391 cli_runner.go:164] Run: docker network inspect ha-472903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:13:35.897195  838391 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0917 00:13:35.901082  838391 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:13:35.912748  838391 mustload.go:65] Loading cluster: ha-472903
	I0917 00:13:35.912967  838391 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:13:35.913168  838391 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0917 00:13:35.931969  838391 host.go:66] Checking if "ha-472903" exists ...
	I0917 00:13:35.932217  838391 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903 for IP: 192.168.49.3
	I0917 00:13:35.932230  838391 certs.go:194] generating shared ca certs ...
	I0917 00:13:35.932244  838391 certs.go:226] acquiring lock for ca certs: {Name:mk87d179b4a631193bd9c86db8034ccf19400cde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:13:35.932358  838391 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key
	I0917 00:13:35.932394  838391 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key
	I0917 00:13:35.932404  838391 certs.go:256] generating profile certs ...
	I0917 00:13:35.932495  838391 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key
	I0917 00:13:35.932546  838391 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.b92722b6
	I0917 00:13:35.932585  838391 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key
	I0917 00:13:35.932596  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 00:13:35.932607  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 00:13:35.932619  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 00:13:35.932630  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 00:13:35.932643  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 00:13:35.932656  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 00:13:35.932668  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 00:13:35.932681  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 00:13:35.932726  838391 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem (1338 bytes)
	W0917 00:13:35.932752  838391 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707_empty.pem, impossibly tiny 0 bytes
	I0917 00:13:35.932761  838391 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 00:13:35.932781  838391 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem (1078 bytes)
	I0917 00:13:35.932801  838391 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem (1123 bytes)
	I0917 00:13:35.932822  838391 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem (1675 bytes)
	I0917 00:13:35.932861  838391 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem (1708 bytes)
	I0917 00:13:35.932888  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem -> /usr/share/ca-certificates/752707.pem
	I0917 00:13:35.932902  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> /usr/share/ca-certificates/7527072.pem
	I0917 00:13:35.932914  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:13:35.932957  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0917 00:13:35.950361  838391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33574 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0917 00:13:36.038689  838391 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0917 00:13:36.046320  838391 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0917 00:13:36.065517  838391 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0917 00:13:36.070746  838391 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0917 00:13:36.088267  838391 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0917 00:13:36.093060  838391 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0917 00:13:36.109798  838391 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0917 00:13:36.114630  838391 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0917 00:13:36.132250  838391 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0917 00:13:36.137979  838391 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0917 00:13:36.158118  838391 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0917 00:13:36.163359  838391 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0917 00:13:36.183892  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 00:13:36.221052  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 00:13:36.260302  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 00:13:36.294497  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0917 00:13:36.328388  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0917 00:13:36.364809  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 00:13:36.406406  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 00:13:36.458823  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 00:13:36.524795  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem --> /usr/share/ca-certificates/752707.pem (1338 bytes)
	I0917 00:13:36.572655  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem --> /usr/share/ca-certificates/7527072.pem (1708 bytes)
	I0917 00:13:36.619864  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 00:13:36.672387  838391 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0917 00:13:36.709674  838391 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0917 00:13:36.746751  838391 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0917 00:13:36.783161  838391 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0917 00:13:36.813099  838391 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0917 00:13:36.837070  838391 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0917 00:13:36.858764  838391 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0917 00:13:36.877818  838391 ssh_runner.go:195] Run: openssl version
	I0917 00:13:36.883443  838391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7527072.pem && ln -fs /usr/share/ca-certificates/7527072.pem /etc/ssl/certs/7527072.pem"
	I0917 00:13:36.894826  838391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7527072.pem
	I0917 00:13:36.899068  838391 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 23:54 /usr/share/ca-certificates/7527072.pem
	I0917 00:13:36.899146  838391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7527072.pem
	I0917 00:13:36.907246  838391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7527072.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 00:13:36.916910  838391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 00:13:36.927032  838391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:13:36.930914  838391 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:13:36.930968  838391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:13:36.940300  838391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 00:13:36.953573  838391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/752707.pem && ln -fs /usr/share/ca-certificates/752707.pem /etc/ssl/certs/752707.pem"
	I0917 00:13:36.967306  838391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/752707.pem
	I0917 00:13:36.971796  838391 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 23:54 /usr/share/ca-certificates/752707.pem
	I0917 00:13:36.971852  838391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/752707.pem
	I0917 00:13:36.981091  838391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/752707.pem /etc/ssl/certs/51391683.0"
	I0917 00:13:36.991490  838391 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 00:13:36.995167  838391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 00:13:37.003067  838391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 00:13:37.009863  838391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 00:13:37.016575  838391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 00:13:37.023485  838391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 00:13:37.032694  838391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0917 00:13:37.042763  838391 kubeadm.go:926] updating node {m02 192.168.49.3 8443 v1.34.0 containerd true true} ...
	I0917 00:13:37.042877  838391 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-472903-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 00:13:37.042911  838391 kube-vip.go:115] generating kube-vip config ...
	I0917 00:13:37.042948  838391 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0917 00:13:37.060530  838391 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:13:37.060601  838391 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0917 00:13:37.060658  838391 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 00:13:37.072293  838391 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 00:13:37.072371  838391 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0917 00:13:37.084220  838391 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0917 00:13:37.109777  838391 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 00:13:37.137135  838391 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0917 00:13:37.165385  838391 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0917 00:13:37.170106  838391 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:13:37.186447  838391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:13:37.337215  838391 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:13:37.351480  838391 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0917 00:13:37.351795  838391 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:13:37.353499  838391 out.go:179] * Verifying Kubernetes components...
	I0917 00:13:37.354663  838391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:13:37.476140  838391 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:13:37.492755  838391 kapi.go:59] client config for ha-472903: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key", CAFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0917 00:13:37.492840  838391 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0917 00:13:37.493129  838391 node_ready.go:35] waiting up to 6m0s for node "ha-472903-m02" to be "Ready" ...
	I0917 00:13:37.501768  838391 node_ready.go:49] node "ha-472903-m02" is "Ready"
	I0917 00:13:37.501795  838391 node_ready.go:38] duration metric: took 8.646756ms for node "ha-472903-m02" to be "Ready" ...
	I0917 00:13:37.501810  838391 api_server.go:52] waiting for apiserver process to appear ...
	I0917 00:13:37.501850  838391 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:13:37.513878  838391 api_server.go:72] duration metric: took 162.352734ms to wait for apiserver process to appear ...
	I0917 00:13:37.513902  838391 api_server.go:88] waiting for apiserver healthz status ...
	I0917 00:13:37.513995  838391 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0917 00:13:37.519494  838391 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0917 00:13:37.520502  838391 api_server.go:141] control plane version: v1.34.0
	I0917 00:13:37.520525  838391 api_server.go:131] duration metric: took 6.615829ms to wait for apiserver health ...
	I0917 00:13:37.520533  838391 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 00:13:37.529003  838391 system_pods.go:59] 24 kube-system pods found
	I0917 00:13:37.529040  838391 system_pods.go:61] "coredns-66bc5c9577-c94hz" [774f1c0f-9759-44c2-957d-5a97670f951b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:13:37.529049  838391 system_pods.go:61] "coredns-66bc5c9577-qn8m7" [1c58205e-e865-42fc-8282-23e3d779ee97] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:13:37.529058  838391 system_pods.go:61] "etcd-ha-472903" [e333577b-838c-41c5-ba86-ce3d7de57077] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 00:13:37.529064  838391 system_pods.go:61] "etcd-ha-472903-m02" [8a478117-c53d-4621-aa09-be3c16d386c0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 00:13:37.529068  838391 system_pods.go:61] "etcd-ha-472903-m03" [73e10c6a-306a-4c7e-b816-1b8d6b815292] Running
	I0917 00:13:37.529072  838391 system_pods.go:61] "kindnet-lh7dv" [1da43ca7-9af7-4573-9cdc-fd21b098ca2c] Running
	I0917 00:13:37.529075  838391 system_pods.go:61] "kindnet-q7c7s" [85db5b30-8ace-4bb0-8886-32b9ca032b2b] Running
	I0917 00:13:37.529078  838391 system_pods.go:61] "kindnet-x6twd" [f2346479-1adb-4bc7-af07-971525be2b05] Running
	I0917 00:13:37.529083  838391 system_pods.go:61] "kube-apiserver-ha-472903" [e2844751-3962-4753-8b63-79c124dd5fd7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:13:37.529092  838391 system_pods.go:61] "kube-apiserver-ha-472903-m02" [6675419c-7693-4970-b73c-8415bcda1684] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:13:37.529096  838391 system_pods.go:61] "kube-apiserver-ha-472903-m03" [8c79e747-e193-4471-a4be-ab4d604998ad] Running
	I0917 00:13:37.529102  838391 system_pods.go:61] "kube-controller-manager-ha-472903" [be5cfd0b-a3b9-44cf-8cde-74e9eb89c738] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 00:13:37.529110  838391 system_pods.go:61] "kube-controller-manager-ha-472903-m02" [54f6e7e0-0a78-4651-b24f-f902c6bf7efb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 00:13:37.529113  838391 system_pods.go:61] "kube-controller-manager-ha-472903-m03" [d48b1c84-6653-43e8-9322-ae2c64471dde] Running
	I0917 00:13:37.529118  838391 system_pods.go:61] "kube-proxy-58lkb" [32fed88c-ce9e-4536-8e96-04ab5b4f5d42] Running
	I0917 00:13:37.529121  838391 system_pods.go:61] "kube-proxy-d4m8f" [d4a70eec-48a7-4ea6-871a-1b5ed2beca9a] Running
	I0917 00:13:37.529125  838391 system_pods.go:61] "kube-proxy-kn6nb" [53644856-9fda-4556-bbb5-12254c4b00a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 00:13:37.529131  838391 system_pods.go:61] "kube-scheduler-ha-472903" [e949de65-b218-45cb-abe7-79b704aae473] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 00:13:37.529136  838391 system_pods.go:61] "kube-scheduler-ha-472903-m02" [08b5a4f0-3aa6-4a82-b171-afc1eafcd4c2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 00:13:37.529144  838391 system_pods.go:61] "kube-scheduler-ha-472903-m03" [7b954c46-3b8c-47e7-b10c-a20dd936d45c] Running
	I0917 00:13:37.529147  838391 system_pods.go:61] "kube-vip-ha-472903" [d3849ebb-d365-491f-955c-2a7ca580290b] Running
	I0917 00:13:37.529150  838391 system_pods.go:61] "kube-vip-ha-472903-m02" [748f096f-bec6-4de8-92f0-128db827bdd6] Running
	I0917 00:13:37.529153  838391 system_pods.go:61] "kube-vip-ha-472903-m03" [62b1f237-95c3-4a40-b3b7-a519f7c80ad4] Running
	I0917 00:13:37.529156  838391 system_pods.go:61] "storage-provisioner" [ac7f283e-4d28-46cf-a519-bd227237d5e7] Running
	I0917 00:13:37.529161  838391 system_pods.go:74] duration metric: took 8.623694ms to wait for pod list to return data ...
	I0917 00:13:37.529167  838391 default_sa.go:34] waiting for default service account to be created ...
	I0917 00:13:37.531877  838391 default_sa.go:45] found service account: "default"
	I0917 00:13:37.531901  838391 default_sa.go:55] duration metric: took 2.728819ms for default service account to be created ...
	I0917 00:13:37.531910  838391 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 00:13:37.538254  838391 system_pods.go:86] 24 kube-system pods found
	I0917 00:13:37.538287  838391 system_pods.go:89] "coredns-66bc5c9577-c94hz" [774f1c0f-9759-44c2-957d-5a97670f951b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:13:37.538298  838391 system_pods.go:89] "coredns-66bc5c9577-qn8m7" [1c58205e-e865-42fc-8282-23e3d779ee97] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:13:37.538308  838391 system_pods.go:89] "etcd-ha-472903" [e333577b-838c-41c5-ba86-ce3d7de57077] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 00:13:37.538315  838391 system_pods.go:89] "etcd-ha-472903-m02" [8a478117-c53d-4621-aa09-be3c16d386c0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 00:13:37.538321  838391 system_pods.go:89] "etcd-ha-472903-m03" [73e10c6a-306a-4c7e-b816-1b8d6b815292] Running
	I0917 00:13:37.538327  838391 system_pods.go:89] "kindnet-lh7dv" [1da43ca7-9af7-4573-9cdc-fd21b098ca2c] Running
	I0917 00:13:37.538333  838391 system_pods.go:89] "kindnet-q7c7s" [85db5b30-8ace-4bb0-8886-32b9ca032b2b] Running
	I0917 00:13:37.538340  838391 system_pods.go:89] "kindnet-x6twd" [f2346479-1adb-4bc7-af07-971525be2b05] Running
	I0917 00:13:37.538353  838391 system_pods.go:89] "kube-apiserver-ha-472903" [e2844751-3962-4753-8b63-79c124dd5fd7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:13:37.538366  838391 system_pods.go:89] "kube-apiserver-ha-472903-m02" [6675419c-7693-4970-b73c-8415bcda1684] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:13:37.538373  838391 system_pods.go:89] "kube-apiserver-ha-472903-m03" [8c79e747-e193-4471-a4be-ab4d604998ad] Running
	I0917 00:13:37.538383  838391 system_pods.go:89] "kube-controller-manager-ha-472903" [be5cfd0b-a3b9-44cf-8cde-74e9eb89c738] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 00:13:37.538396  838391 system_pods.go:89] "kube-controller-manager-ha-472903-m02" [54f6e7e0-0a78-4651-b24f-f902c6bf7efb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 00:13:37.538406  838391 system_pods.go:89] "kube-controller-manager-ha-472903-m03" [d48b1c84-6653-43e8-9322-ae2c64471dde] Running
	I0917 00:13:37.538447  838391 system_pods.go:89] "kube-proxy-58lkb" [32fed88c-ce9e-4536-8e96-04ab5b4f5d42] Running
	I0917 00:13:37.538457  838391 system_pods.go:89] "kube-proxy-d4m8f" [d4a70eec-48a7-4ea6-871a-1b5ed2beca9a] Running
	I0917 00:13:37.538465  838391 system_pods.go:89] "kube-proxy-kn6nb" [53644856-9fda-4556-bbb5-12254c4b00a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 00:13:37.538479  838391 system_pods.go:89] "kube-scheduler-ha-472903" [e949de65-b218-45cb-abe7-79b704aae473] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 00:13:37.538492  838391 system_pods.go:89] "kube-scheduler-ha-472903-m02" [08b5a4f0-3aa6-4a82-b171-afc1eafcd4c2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 00:13:37.538504  838391 system_pods.go:89] "kube-scheduler-ha-472903-m03" [7b954c46-3b8c-47e7-b10c-a20dd936d45c] Running
	I0917 00:13:37.538511  838391 system_pods.go:89] "kube-vip-ha-472903" [d3849ebb-d365-491f-955c-2a7ca580290b] Running
	I0917 00:13:37.538517  838391 system_pods.go:89] "kube-vip-ha-472903-m02" [748f096f-bec6-4de8-92f0-128db827bdd6] Running
	I0917 00:13:37.538523  838391 system_pods.go:89] "kube-vip-ha-472903-m03" [62b1f237-95c3-4a40-b3b7-a519f7c80ad4] Running
	I0917 00:13:37.538528  838391 system_pods.go:89] "storage-provisioner" [ac7f283e-4d28-46cf-a519-bd227237d5e7] Running
	I0917 00:13:37.538538  838391 system_pods.go:126] duration metric: took 6.620318ms to wait for k8s-apps to be running ...
	I0917 00:13:37.538550  838391 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 00:13:37.538595  838391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:13:37.551380  838391 system_svc.go:56] duration metric: took 12.817524ms WaitForService to wait for kubelet
	I0917 00:13:37.551421  838391 kubeadm.go:578] duration metric: took 199.889741ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:13:37.551446  838391 node_conditions.go:102] verifying NodePressure condition ...
	I0917 00:13:37.554601  838391 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:13:37.554630  838391 node_conditions.go:123] node cpu capacity is 8
	I0917 00:13:37.554646  838391 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:13:37.554651  838391 node_conditions.go:123] node cpu capacity is 8
	I0917 00:13:37.554657  838391 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:13:37.554661  838391 node_conditions.go:123] node cpu capacity is 8
	I0917 00:13:37.554667  838391 node_conditions.go:105] duration metric: took 3.21568ms to run NodePressure ...
	I0917 00:13:37.554682  838391 start.go:241] waiting for startup goroutines ...
	I0917 00:13:37.554713  838391 start.go:255] writing updated cluster config ...
	I0917 00:13:37.556785  838391 out.go:203] 
	I0917 00:13:37.558118  838391 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:13:37.558205  838391 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0917 00:13:37.560287  838391 out.go:179] * Starting "ha-472903-m03" control-plane node in "ha-472903" cluster
	I0917 00:13:37.561674  838391 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0917 00:13:37.562756  838391 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:13:37.563720  838391 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0917 00:13:37.563746  838391 cache.go:58] Caching tarball of preloaded images
	I0917 00:13:37.563814  838391 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:13:37.563852  838391 preload.go:172] Found /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 00:13:37.563866  838391 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0917 00:13:37.563958  838391 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0917 00:13:37.584605  838391 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:13:37.584624  838391 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:13:37.584638  838391 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:13:37.584670  838391 start.go:360] acquireMachinesLock for ha-472903-m03: {Name:mk61000bb8e4699ca3310a7fc257e30a156b69de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:13:37.584735  838391 start.go:364] duration metric: took 44.453µs to acquireMachinesLock for "ha-472903-m03"
	I0917 00:13:37.584761  838391 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:13:37.584768  838391 fix.go:54] fixHost starting: m03
	I0917 00:13:37.585018  838391 cli_runner.go:164] Run: docker container inspect ha-472903-m03 --format={{.State.Status}}
	I0917 00:13:37.604118  838391 fix.go:112] recreateIfNeeded on ha-472903-m03: state=Stopped err=<nil>
	W0917 00:13:37.604141  838391 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:13:37.606555  838391 out.go:252] * Restarting existing docker container for "ha-472903-m03" ...
	I0917 00:13:37.606618  838391 cli_runner.go:164] Run: docker start ha-472903-m03
	I0917 00:13:37.854742  838391 cli_runner.go:164] Run: docker container inspect ha-472903-m03 --format={{.State.Status}}
	I0917 00:13:37.873167  838391 kic.go:430] container "ha-472903-m03" state is running.
	I0917 00:13:37.873554  838391 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m03
	I0917 00:13:37.894030  838391 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0917 00:13:37.894294  838391 machine.go:93] provisionDockerMachine start ...
	I0917 00:13:37.894371  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0917 00:13:37.912571  838391 main.go:141] libmachine: Using SSH client type: native
	I0917 00:13:37.912785  838391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33584 <nil> <nil>}
	I0917 00:13:37.912796  838391 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:13:37.913480  838391 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50250->127.0.0.1:33584: read: connection reset by peer
	I0917 00:13:41.078339  838391 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-472903-m03
	
	I0917 00:13:41.078371  838391 ubuntu.go:182] provisioning hostname "ha-472903-m03"
	I0917 00:13:41.078468  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0917 00:13:41.099623  838391 main.go:141] libmachine: Using SSH client type: native
	I0917 00:13:41.099906  838391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33584 <nil> <nil>}
	I0917 00:13:41.099929  838391 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-472903-m03 && echo "ha-472903-m03" | sudo tee /etc/hostname
	I0917 00:13:41.256611  838391 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-472903-m03
	
	I0917 00:13:41.256681  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0917 00:13:41.275951  838391 main.go:141] libmachine: Using SSH client type: native
	I0917 00:13:41.276266  838391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33584 <nil> <nil>}
	I0917 00:13:41.276291  838391 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-472903-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-472903-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-472903-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 00:13:41.413177  838391 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:13:41.413213  838391 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-749120/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-749120/.minikube}
	I0917 00:13:41.413235  838391 ubuntu.go:190] setting up certificates
	I0917 00:13:41.413252  838391 provision.go:84] configureAuth start
	I0917 00:13:41.413326  838391 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m03
	I0917 00:13:41.432242  838391 provision.go:143] copyHostCerts
	I0917 00:13:41.432284  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem
	I0917 00:13:41.432323  838391 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem, removing ...
	I0917 00:13:41.432334  838391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem
	I0917 00:13:41.432427  838391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem (1078 bytes)
	I0917 00:13:41.432522  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem
	I0917 00:13:41.432547  838391 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem, removing ...
	I0917 00:13:41.432556  838391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem
	I0917 00:13:41.432591  838391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem (1123 bytes)
	I0917 00:13:41.432652  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem
	I0917 00:13:41.432676  838391 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem, removing ...
	I0917 00:13:41.432684  838391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem
	I0917 00:13:41.432717  838391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem (1675 bytes)
	I0917 00:13:41.432785  838391 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem org=jenkins.ha-472903-m03 san=[127.0.0.1 192.168.49.4 ha-472903-m03 localhost minikube]
	I0917 00:13:41.862573  838391 provision.go:177] copyRemoteCerts
	I0917 00:13:41.862629  838391 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:13:41.862665  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0917 00:13:41.885400  838391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33584 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa Username:docker}
	I0917 00:13:41.994335  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 00:13:41.994423  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 00:13:42.028538  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 00:13:42.028607  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 00:13:42.067649  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 00:13:42.067726  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 00:13:42.099602  838391 provision.go:87] duration metric: took 686.33067ms to configureAuth
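configureAuth above regenerates the machine's server certificate (SANs include 192.168.49.4 and ha-472903-m03) and copyRemoteCerts places it under /etc/docker on the node. A quick spot-check on the node, as a sketch using the paths from the scp lines above:
    sudo ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem
    sudo openssl x509 -noout -subject -enddate -in /etc/docker/server.pem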
	I0917 00:13:42.099636  838391 ubuntu.go:206] setting minikube options for container-runtime
	I0917 00:13:42.099920  838391 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:13:42.099938  838391 machine.go:96] duration metric: took 4.205627363s to provisionDockerMachine
	I0917 00:13:42.099950  838391 start.go:293] postStartSetup for "ha-472903-m03" (driver="docker")
	I0917 00:13:42.099962  838391 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 00:13:42.100117  838391 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 00:13:42.100183  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0917 00:13:42.122141  838391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33584 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa Username:docker}
	I0917 00:13:42.233836  838391 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 00:13:42.238854  838391 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 00:13:42.238889  838391 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 00:13:42.238900  838391 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 00:13:42.238908  838391 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 00:13:42.238924  838391 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-749120/.minikube/addons for local assets ...
	I0917 00:13:42.238985  838391 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-749120/.minikube/files for local assets ...
	I0917 00:13:42.239080  838391 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> 7527072.pem in /etc/ssl/certs
	I0917 00:13:42.239088  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> /etc/ssl/certs/7527072.pem
	I0917 00:13:42.239207  838391 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 00:13:42.256636  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem --> /etc/ssl/certs/7527072.pem (1708 bytes)
	I0917 00:13:42.284884  838391 start.go:296] duration metric: took 184.914637ms for postStartSetup
	I0917 00:13:42.284980  838391 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:13:42.285038  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0917 00:13:42.306309  838391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33584 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa Username:docker}
	I0917 00:13:42.403953  838391 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:13:42.409407  838391 fix.go:56] duration metric: took 4.824632112s for fixHost
	I0917 00:13:42.409462  838391 start.go:83] releasing machines lock for "ha-472903-m03", held for 4.824710137s
	I0917 00:13:42.409541  838391 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m03
	I0917 00:13:42.432198  838391 out.go:179] * Found network options:
	I0917 00:13:42.433393  838391 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0917 00:13:42.434713  838391 proxy.go:120] fail to check proxy env: Error ip not in block
	W0917 00:13:42.434749  838391 proxy.go:120] fail to check proxy env: Error ip not in block
	W0917 00:13:42.434778  838391 proxy.go:120] fail to check proxy env: Error ip not in block
	W0917 00:13:42.434796  838391 proxy.go:120] fail to check proxy env: Error ip not in block
	I0917 00:13:42.434873  838391 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 00:13:42.434927  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0917 00:13:42.434964  838391 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 00:13:42.435037  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m03
	I0917 00:13:42.456445  838391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33584 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa Username:docker}
	I0917 00:13:42.457637  838391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33584 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m03/id_rsa Username:docker}
	I0917 00:13:42.649452  838391 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0917 00:13:42.669255  838391 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0917 00:13:42.669336  838391 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:13:42.678466  838391 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
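The two find commands above normalize any loopback CNI config (injecting a "name" field and pinning cniVersion to 1.0.0) and would rename bridge/podman configs out of the way; here none were present. A spot-check of the result, as a sketch:
    ls /etc/cni/net.d/
    sudo cat /etc/cni/net.d/*loopback.conf* 2>/dev/null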
	I0917 00:13:42.678490  838391 start.go:495] detecting cgroup driver to use...
	I0917 00:13:42.678537  838391 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:13:42.678593  838391 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 00:13:42.694034  838391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 00:13:42.706095  838391 docker.go:218] disabling cri-docker service (if available) ...
	I0917 00:13:42.706148  838391 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 00:13:42.720214  838391 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 00:13:42.731568  838391 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 00:13:42.844067  838391 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 00:13:42.990517  838391 docker.go:234] disabling docker service ...
	I0917 00:13:42.990597  838391 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 00:13:43.009784  838391 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 00:13:43.025954  838391 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 00:13:43.175561  838391 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 00:13:43.288802  838391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 00:13:43.302127  838391 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:13:43.320551  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0917 00:13:43.330880  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 00:13:43.341008  838391 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0917 00:13:43.341063  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0917 00:13:43.351160  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:13:43.361609  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 00:13:43.371882  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:13:43.382351  838391 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 00:13:43.391804  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 00:13:43.401909  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 00:13:43.413802  838391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 00:13:43.424357  838391 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 00:13:43.433387  838391 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 00:13:43.442035  838391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:13:43.556953  838391 ssh_runner.go:195] Run: sudo systemctl restart containerd
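The sed edits above switch containerd to the systemd cgroup driver, pin the pause image to registry.k8s.io/pause:3.10.1, point the CNI conf_dir at /etc/cni/net.d and re-enable unprivileged ports before the restart. A sketch for verifying the rewritten config once containerd is back:
    grep -nE 'SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
    sudo systemctl is-active containerd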
	I0917 00:13:43.771383  838391 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0917 00:13:43.771487  838391 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0917 00:13:43.776031  838391 start.go:563] Will wait 60s for crictl version
	I0917 00:13:43.776089  838391 ssh_runner.go:195] Run: which crictl
	I0917 00:13:43.779581  838391 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:13:43.819843  838391 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0917 00:13:43.819918  838391 ssh_runner.go:195] Run: containerd --version
	I0917 00:13:43.856395  838391 ssh_runner.go:195] Run: containerd --version
	I0917 00:13:43.887208  838391 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0917 00:13:43.888621  838391 out.go:179]   - env NO_PROXY=192.168.49.2
	I0917 00:13:43.889813  838391 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0917 00:13:43.890984  838391 cli_runner.go:164] Run: docker network inspect ha-472903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:13:43.910830  838391 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0917 00:13:43.915764  838391 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
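The one-liner above rewrites /etc/hosts so host.minikube.internal resolves to the Docker network gateway (192.168.49.1 here), letting workloads on the node reach the host. Spot-check, as a sketch:
    grep host.minikube.internal /etc/hosts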
	I0917 00:13:43.928519  838391 mustload.go:65] Loading cluster: ha-472903
	I0917 00:13:43.928713  838391 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:13:43.928903  838391 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0917 00:13:43.947488  838391 host.go:66] Checking if "ha-472903" exists ...
	I0917 00:13:43.947756  838391 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903 for IP: 192.168.49.4
	I0917 00:13:43.947768  838391 certs.go:194] generating shared ca certs ...
	I0917 00:13:43.947788  838391 certs.go:226] acquiring lock for ca certs: {Name:mk87d179b4a631193bd9c86db8034ccf19400cde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:13:43.947924  838391 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key
	I0917 00:13:43.947984  838391 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key
	I0917 00:13:43.947997  838391 certs.go:256] generating profile certs ...
	I0917 00:13:43.948089  838391 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key
	I0917 00:13:43.948160  838391 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.14b885b8
	I0917 00:13:43.948220  838391 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key
	I0917 00:13:43.948236  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 00:13:43.948257  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 00:13:43.948274  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 00:13:43.948291  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 00:13:43.948305  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 00:13:43.948322  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 00:13:43.948341  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 00:13:43.948359  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 00:13:43.948448  838391 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem (1338 bytes)
	W0917 00:13:43.948497  838391 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707_empty.pem, impossibly tiny 0 bytes
	I0917 00:13:43.948514  838391 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 00:13:43.948542  838391 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem (1078 bytes)
	I0917 00:13:43.948574  838391 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem (1123 bytes)
	I0917 00:13:43.948605  838391 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem (1675 bytes)
	I0917 00:13:43.948679  838391 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem (1708 bytes)
	I0917 00:13:43.948730  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:13:43.948750  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem -> /usr/share/ca-certificates/752707.pem
	I0917 00:13:43.948766  838391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> /usr/share/ca-certificates/7527072.pem
	I0917 00:13:43.948828  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0917 00:13:43.966378  838391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33574 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0917 00:13:44.054709  838391 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0917 00:13:44.058781  838391 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0917 00:13:44.071805  838391 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0917 00:13:44.075707  838391 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0917 00:13:44.088751  838391 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0917 00:13:44.092347  838391 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0917 00:13:44.104909  838391 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0917 00:13:44.108527  838391 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0917 00:13:44.121249  838391 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0917 00:13:44.124730  838391 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0917 00:13:44.137128  838391 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0917 00:13:44.140545  838391 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0917 00:13:44.153313  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 00:13:44.178995  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 00:13:44.203321  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 00:13:44.228724  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0917 00:13:44.253672  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0917 00:13:44.277964  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 00:13:44.302441  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 00:13:44.326350  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 00:13:44.351539  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 00:13:44.376666  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem --> /usr/share/ca-certificates/752707.pem (1338 bytes)
	I0917 00:13:44.404677  838391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem --> /usr/share/ca-certificates/7527072.pem (1708 bytes)
	I0917 00:13:44.431366  838391 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0917 00:13:44.450278  838391 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0917 00:13:44.468513  838391 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0917 00:13:44.486743  838391 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0917 00:13:44.504987  838391 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0917 00:13:44.524143  838391 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0917 00:13:44.542282  838391 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
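The stat/scp pairs above read the cluster's shared signing material (service-account key pair, front-proxy CA, etcd CA) from the primary control plane into memory and replay it onto m03, so every control plane issues tokens and serving certs from the same keys. A hedged way to confirm both nodes hold identical material, run from the host with the container names from this log:
    for n in ha-472903 ha-472903-m03; do
      docker exec "$n" sha256sum /var/lib/minikube/certs/sa.key /var/lib/minikube/certs/etcd/ca.crt
    done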
	I0917 00:13:44.563055  838391 ssh_runner.go:195] Run: openssl version
	I0917 00:13:44.569331  838391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 00:13:44.580250  838391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:13:44.584080  838391 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:13:44.584138  838391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:13:44.591070  838391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 00:13:44.600282  838391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/752707.pem && ln -fs /usr/share/ca-certificates/752707.pem /etc/ssl/certs/752707.pem"
	I0917 00:13:44.610104  838391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/752707.pem
	I0917 00:13:44.613726  838391 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 23:54 /usr/share/ca-certificates/752707.pem
	I0917 00:13:44.613768  838391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/752707.pem
	I0917 00:13:44.620611  838391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/752707.pem /etc/ssl/certs/51391683.0"
	I0917 00:13:44.629788  838391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7527072.pem && ln -fs /usr/share/ca-certificates/7527072.pem /etc/ssl/certs/7527072.pem"
	I0917 00:13:44.639483  838391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7527072.pem
	I0917 00:13:44.643062  838391 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 23:54 /usr/share/ca-certificates/7527072.pem
	I0917 00:13:44.643110  838391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7527072.pem
	I0917 00:13:44.650489  838391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7527072.pem /etc/ssl/certs/3ec20f2e.0"
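The ln -fs steps above follow OpenSSL's c_rehash convention: the link name under /etc/ssl/certs is the subject-name hash of the certificate plus a .0 suffix, which is how TLS clients on the node find the minikube CA and the extra user certs. The same operation by hand, as a sketch:
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    ls -l "/etc/ssl/certs/${h}.0"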
	I0917 00:13:44.659935  838391 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 00:13:44.663514  838391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 00:13:44.669906  838391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 00:13:44.676511  838391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 00:13:44.682889  838391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 00:13:44.689353  838391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 00:13:44.695631  838391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0917 00:13:44.702340  838391 kubeadm.go:926] updating node {m03 192.168.49.4 8443 v1.34.0 containerd true true} ...
	I0917 00:13:44.702470  838391 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-472903-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
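The drop-in above clears and re-sets ExecStart so this kubelet advertises --node-ip=192.168.49.4 with --hostname-override=ha-472903-m03; it is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below. Verifying the rendered unit on the node, as a sketch:
    sudo systemctl cat kubelet
    sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf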
	I0917 00:13:44.702498  838391 kube-vip.go:115] generating kube-vip config ...
	I0917 00:13:44.702533  838391 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0917 00:13:44.715980  838391 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:13:44.716039  838391 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
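Because lsmod found no ip_vs modules, control-plane load-balancing is skipped and this manifest only has kube-vip claim the VIP 192.168.49.254 over ARP (vip_arp=true) with leader election across the control planes; the kubelet runs it as a static pod from /etc/kubernetes/manifests. A sketch for checking it came up after the kubelet start below:
    sudo cat /etc/kubernetes/manifests/kube-vip.yaml
    sudo crictl pods --name kube-vip
    ping -c 1 192.168.49.254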
	I0917 00:13:44.716091  838391 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 00:13:44.725480  838391 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 00:13:44.725529  838391 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0917 00:13:44.734323  838391 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0917 00:13:44.753458  838391 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 00:13:44.773199  838391 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0917 00:13:44.791551  838391 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0917 00:13:44.795163  838391 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:13:44.806641  838391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:13:44.919558  838391 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:13:44.932561  838391 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0917 00:13:44.932786  838391 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:13:44.934564  838391 out.go:179] * Verifying Kubernetes components...
	I0917 00:13:44.935745  838391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:13:45.049795  838391 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:13:45.064166  838391 kapi.go:59] client config for ha-472903: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key", CAFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0917 00:13:45.064235  838391 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0917 00:13:45.064458  838391 node_ready.go:35] waiting up to 6m0s for node "ha-472903-m03" to be "Ready" ...
	I0917 00:13:45.067494  838391 node_ready.go:49] node "ha-472903-m03" is "Ready"
	I0917 00:13:45.067523  838391 node_ready.go:38] duration metric: took 3.046711ms for node "ha-472903-m03" to be "Ready" ...
	I0917 00:13:45.067540  838391 api_server.go:52] waiting for apiserver process to appear ...
	I0917 00:13:45.067600  838391 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:13:45.078867  838391 api_server.go:72] duration metric: took 146.25055ms to wait for apiserver process to appear ...
	I0917 00:13:45.078891  838391 api_server.go:88] waiting for apiserver healthz status ...
	I0917 00:13:45.078908  838391 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0917 00:13:45.084241  838391 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
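Note the probe goes to the first control plane directly at 192.168.49.2:8443, per the stale-ClientConfig override logged above, rather than through the VIP. A hedged manual equivalent (on kubeadm-style clusters /healthz is normally readable anonymously):
    curl -sk https://192.168.49.2:8443/healthz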
	I0917 00:13:45.085084  838391 api_server.go:141] control plane version: v1.34.0
	I0917 00:13:45.085104  838391 api_server.go:131] duration metric: took 6.207355ms to wait for apiserver health ...
	I0917 00:13:45.085112  838391 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 00:13:45.090968  838391 system_pods.go:59] 24 kube-system pods found
	I0917 00:13:45.091001  838391 system_pods.go:61] "coredns-66bc5c9577-c94hz" [774f1c0f-9759-44c2-957d-5a97670f951b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:13:45.091023  838391 system_pods.go:61] "coredns-66bc5c9577-qn8m7" [1c58205e-e865-42fc-8282-23e3d779ee97] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:13:45.091035  838391 system_pods.go:61] "etcd-ha-472903" [e333577b-838c-41c5-ba86-ce3d7de57077] Running
	I0917 00:13:45.091045  838391 system_pods.go:61] "etcd-ha-472903-m02" [8a478117-c53d-4621-aa09-be3c16d386c0] Running
	I0917 00:13:45.091053  838391 system_pods.go:61] "etcd-ha-472903-m03" [73e10c6a-306a-4c7e-b816-1b8d6b815292] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 00:13:45.091060  838391 system_pods.go:61] "kindnet-lh7dv" [1da43ca7-9af7-4573-9cdc-fd21b098ca2c] Running
	I0917 00:13:45.091064  838391 system_pods.go:61] "kindnet-q7c7s" [85db5b30-8ace-4bb0-8886-32b9ca032b2b] Running
	I0917 00:13:45.091070  838391 system_pods.go:61] "kindnet-x6twd" [f2346479-1adb-4bc7-af07-971525be2b05] Running
	I0917 00:13:45.091076  838391 system_pods.go:61] "kube-apiserver-ha-472903" [e2844751-3962-4753-8b63-79c124dd5fd7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:13:45.091088  838391 system_pods.go:61] "kube-apiserver-ha-472903-m02" [6675419c-7693-4970-b73c-8415bcda1684] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:13:45.091100  838391 system_pods.go:61] "kube-apiserver-ha-472903-m03" [8c79e747-e193-4471-a4be-ab4d604998ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:13:45.091109  838391 system_pods.go:61] "kube-controller-manager-ha-472903" [be5cfd0b-a3b9-44cf-8cde-74e9eb89c738] Running
	I0917 00:13:45.091115  838391 system_pods.go:61] "kube-controller-manager-ha-472903-m02" [54f6e7e0-0a78-4651-b24f-f902c6bf7efb] Running
	I0917 00:13:45.091127  838391 system_pods.go:61] "kube-controller-manager-ha-472903-m03" [d48b1c84-6653-43e8-9322-ae2c64471dde] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 00:13:45.091135  838391 system_pods.go:61] "kube-proxy-58lkb" [32fed88c-ce9e-4536-8e96-04ab5b4f5d42] Running
	I0917 00:13:45.091141  838391 system_pods.go:61] "kube-proxy-d4m8f" [d4a70eec-48a7-4ea6-871a-1b5ed2beca9a] Running
	I0917 00:13:45.091152  838391 system_pods.go:61] "kube-proxy-kn6nb" [53644856-9fda-4556-bbb5-12254c4b00a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 00:13:45.091159  838391 system_pods.go:61] "kube-scheduler-ha-472903" [e949de65-b218-45cb-abe7-79b704aae473] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 00:13:45.091164  838391 system_pods.go:61] "kube-scheduler-ha-472903-m02" [08b5a4f0-3aa6-4a82-b171-afc1eafcd4c2] Running
	I0917 00:13:45.091177  838391 system_pods.go:61] "kube-scheduler-ha-472903-m03" [7b954c46-3b8c-47e7-b10c-a20dd936d45c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 00:13:45.091187  838391 system_pods.go:61] "kube-vip-ha-472903" [d3849ebb-d365-491f-955c-2a7ca580290b] Running
	I0917 00:13:45.091196  838391 system_pods.go:61] "kube-vip-ha-472903-m02" [748f096f-bec6-4de8-92f0-128db827bdd6] Running
	I0917 00:13:45.091200  838391 system_pods.go:61] "kube-vip-ha-472903-m03" [62b1f237-95c3-4a40-b3b7-a519f7c80ad4] Running
	I0917 00:13:45.091208  838391 system_pods.go:61] "storage-provisioner" [ac7f283e-4d28-46cf-a519-bd227237d5e7] Running
	I0917 00:13:45.091216  838391 system_pods.go:74] duration metric: took 6.096009ms to wait for pod list to return data ...
	I0917 00:13:45.091227  838391 default_sa.go:34] waiting for default service account to be created ...
	I0917 00:13:45.093796  838391 default_sa.go:45] found service account: "default"
	I0917 00:13:45.093813  838391 default_sa.go:55] duration metric: took 2.577656ms for default service account to be created ...
	I0917 00:13:45.093820  838391 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 00:13:45.099455  838391 system_pods.go:86] 24 kube-system pods found
	I0917 00:13:45.099490  838391 system_pods.go:89] "coredns-66bc5c9577-c94hz" [774f1c0f-9759-44c2-957d-5a97670f951b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:13:45.099501  838391 system_pods.go:89] "coredns-66bc5c9577-qn8m7" [1c58205e-e865-42fc-8282-23e3d779ee97] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:13:45.099507  838391 system_pods.go:89] "etcd-ha-472903" [e333577b-838c-41c5-ba86-ce3d7de57077] Running
	I0917 00:13:45.099511  838391 system_pods.go:89] "etcd-ha-472903-m02" [8a478117-c53d-4621-aa09-be3c16d386c0] Running
	I0917 00:13:45.099518  838391 system_pods.go:89] "etcd-ha-472903-m03" [73e10c6a-306a-4c7e-b816-1b8d6b815292] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 00:13:45.099540  838391 system_pods.go:89] "kindnet-lh7dv" [1da43ca7-9af7-4573-9cdc-fd21b098ca2c] Running
	I0917 00:13:45.099551  838391 system_pods.go:89] "kindnet-q7c7s" [85db5b30-8ace-4bb0-8886-32b9ca032b2b] Running
	I0917 00:13:45.099556  838391 system_pods.go:89] "kindnet-x6twd" [f2346479-1adb-4bc7-af07-971525be2b05] Running
	I0917 00:13:45.099563  838391 system_pods.go:89] "kube-apiserver-ha-472903" [e2844751-3962-4753-8b63-79c124dd5fd7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:13:45.099578  838391 system_pods.go:89] "kube-apiserver-ha-472903-m02" [6675419c-7693-4970-b73c-8415bcda1684] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:13:45.099589  838391 system_pods.go:89] "kube-apiserver-ha-472903-m03" [8c79e747-e193-4471-a4be-ab4d604998ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:13:45.099596  838391 system_pods.go:89] "kube-controller-manager-ha-472903" [be5cfd0b-a3b9-44cf-8cde-74e9eb89c738] Running
	I0917 00:13:45.099601  838391 system_pods.go:89] "kube-controller-manager-ha-472903-m02" [54f6e7e0-0a78-4651-b24f-f902c6bf7efb] Running
	I0917 00:13:45.099614  838391 system_pods.go:89] "kube-controller-manager-ha-472903-m03" [d48b1c84-6653-43e8-9322-ae2c64471dde] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 00:13:45.099624  838391 system_pods.go:89] "kube-proxy-58lkb" [32fed88c-ce9e-4536-8e96-04ab5b4f5d42] Running
	I0917 00:13:45.099632  838391 system_pods.go:89] "kube-proxy-d4m8f" [d4a70eec-48a7-4ea6-871a-1b5ed2beca9a] Running
	I0917 00:13:45.099639  838391 system_pods.go:89] "kube-proxy-kn6nb" [53644856-9fda-4556-bbb5-12254c4b00a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 00:13:45.099649  838391 system_pods.go:89] "kube-scheduler-ha-472903" [e949de65-b218-45cb-abe7-79b704aae473] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 00:13:45.099657  838391 system_pods.go:89] "kube-scheduler-ha-472903-m02" [08b5a4f0-3aa6-4a82-b171-afc1eafcd4c2] Running
	I0917 00:13:45.099665  838391 system_pods.go:89] "kube-scheduler-ha-472903-m03" [7b954c46-3b8c-47e7-b10c-a20dd936d45c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 00:13:45.099678  838391 system_pods.go:89] "kube-vip-ha-472903" [d3849ebb-d365-491f-955c-2a7ca580290b] Running
	I0917 00:13:45.099682  838391 system_pods.go:89] "kube-vip-ha-472903-m02" [748f096f-bec6-4de8-92f0-128db827bdd6] Running
	I0917 00:13:45.099688  838391 system_pods.go:89] "kube-vip-ha-472903-m03" [62b1f237-95c3-4a40-b3b7-a519f7c80ad4] Running
	I0917 00:13:45.099693  838391 system_pods.go:89] "storage-provisioner" [ac7f283e-4d28-46cf-a519-bd227237d5e7] Running
	I0917 00:13:45.099701  838391 system_pods.go:126] duration metric: took 5.874708ms to wait for k8s-apps to be running ...
	I0917 00:13:45.099714  838391 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 00:13:45.099765  838391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:13:45.111785  838391 system_svc.go:56] duration metric: took 12.061761ms WaitForService to wait for kubelet
	I0917 00:13:45.111811  838391 kubeadm.go:578] duration metric: took 179.201567ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:13:45.111829  838391 node_conditions.go:102] verifying NodePressure condition ...
	I0917 00:13:45.115075  838391 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:13:45.115095  838391 node_conditions.go:123] node cpu capacity is 8
	I0917 00:13:45.115109  838391 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:13:45.115114  838391 node_conditions.go:123] node cpu capacity is 8
	I0917 00:13:45.115118  838391 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:13:45.115124  838391 node_conditions.go:123] node cpu capacity is 8
	I0917 00:13:45.115130  838391 node_conditions.go:105] duration metric: took 3.295987ms to run NodePressure ...
	I0917 00:13:45.115145  838391 start.go:241] waiting for startup goroutines ...
	I0917 00:13:45.115177  838391 start.go:255] writing updated cluster config ...
	I0917 00:13:45.116870  838391 out.go:203] 
	I0917 00:13:45.117967  838391 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:13:45.118090  838391 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0917 00:13:45.119494  838391 out.go:179] * Starting "ha-472903-m04" worker node in "ha-472903" cluster
	I0917 00:13:45.120460  838391 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0917 00:13:45.121518  838391 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:13:45.122495  838391 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0917 00:13:45.122511  838391 cache.go:58] Caching tarball of preloaded images
	I0917 00:13:45.122563  838391 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:13:45.122595  838391 preload.go:172] Found /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 00:13:45.122603  838391 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0917 00:13:45.122694  838391 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0917 00:13:45.143478  838391 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:13:45.143500  838391 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:13:45.143517  838391 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:13:45.143550  838391 start.go:360] acquireMachinesLock for ha-472903-m04: {Name:mkdbbd0d5b3cd7ad4b13d37f2d562d6d6421c5de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:13:45.143618  838391 start.go:364] duration metric: took 45.935µs to acquireMachinesLock for "ha-472903-m04"
	I0917 00:13:45.143643  838391 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:13:45.143650  838391 fix.go:54] fixHost starting: m04
	I0917 00:13:45.143945  838391 cli_runner.go:164] Run: docker container inspect ha-472903-m04 --format={{.State.Status}}
	I0917 00:13:45.161874  838391 fix.go:112] recreateIfNeeded on ha-472903-m04: state=Stopped err=<nil>
	W0917 00:13:45.161907  838391 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:13:45.163684  838391 out.go:252] * Restarting existing docker container for "ha-472903-m04" ...
	I0917 00:13:45.163768  838391 cli_runner.go:164] Run: docker start ha-472903-m04
	I0917 00:13:45.414854  838391 cli_runner.go:164] Run: docker container inspect ha-472903-m04 --format={{.State.Status}}
	I0917 00:13:45.433545  838391 kic.go:430] container "ha-472903-m04" state is running.
	I0917 00:13:45.433944  838391 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m04
	I0917 00:13:45.452344  838391 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0917 00:13:45.452626  838391 machine.go:93] provisionDockerMachine start ...
	I0917 00:13:45.452705  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	I0917 00:13:45.471203  838391 main.go:141] libmachine: Using SSH client type: native
	I0917 00:13:45.471486  838391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33589 <nil> <nil>}
	I0917 00:13:45.471509  838391 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:13:45.472182  838391 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55516->127.0.0.1:33589: read: connection reset by peer
	I0917 00:13:48.473360  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:13:51.474441  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:13:54.475694  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:13:57.476729  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:00.477687  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:03.477978  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:06.479736  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:09.480885  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:12.482720  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:15.483800  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:18.484741  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:21.485809  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:24.487156  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:27.488676  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:30.489805  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:33.490276  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:36.491714  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:39.492658  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:42.493967  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:45.494632  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:48.495764  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:51.496767  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:54.497734  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:14:57.499659  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:00.500675  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:03.501862  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:06.503834  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:09.505079  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:12.507641  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:15.508761  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:18.509736  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:21.510672  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:24.512280  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:27.514552  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:30.515709  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:33.516144  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:36.518405  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:39.519733  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:42.521625  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:45.522451  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:48.523249  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:51.524945  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:54.525931  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:15:57.527643  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:16:00.528649  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:16:03.529267  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:16:06.531578  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:16:09.532530  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:16:12.534632  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:16:15.537051  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:16:18.537304  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:16:21.538664  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:16:24.539680  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:16:27.541681  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:16:30.542852  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:16:33.543744  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:16:36.544245  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:16:39.544518  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:16:42.546746  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33589: connect: connection refused
	I0917 00:16:45.548509  838391 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:16:45.548571  838391 ubuntu.go:182] provisioning hostname "ha-472903-m04"
	I0917 00:16:45.548664  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:16:45.567482  838391 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	I0917 00:16:45.567574  838391 machine.go:96] duration metric: took 3m0.114930329s to provisionDockerMachine
	I0917 00:16:45.567666  838391 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:16:45.567704  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:16:45.586204  838391 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	I0917 00:16:45.586381  838391 retry.go:31] will retry after 243.120334ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:16:45.829742  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:16:45.848018  838391 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	I0917 00:16:45.848165  838391 retry.go:31] will retry after 204.404017ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:16:46.053620  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:16:46.071508  838391 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	I0917 00:16:46.071648  838391 retry.go:31] will retry after 637.92377ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:16:46.710530  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:16:46.728463  838391 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	W0917 00:16:46.728598  838391 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:16:46.728620  838391 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:16:46.728676  838391 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:16:46.728722  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:16:46.746202  838391 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	I0917 00:16:46.746328  838391 retry.go:31] will retry after 328.494131ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:16:47.075622  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:16:47.094084  838391 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	I0917 00:16:47.094205  838391 retry.go:31] will retry after 397.703456ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:16:47.492843  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:16:47.511608  838391 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	I0917 00:16:47.511709  838391 retry.go:31] will retry after 759.296258ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:16:48.271608  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:16:48.289666  838391 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	W0917 00:16:48.289812  838391 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:16:48.289830  838391 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:16:48.289844  838391 fix.go:56] duration metric: took 3m3.146193546s for fixHost
	I0917 00:16:48.289858  838391 start.go:83] releasing machines lock for "ha-472903-m04", held for 3m3.146226948s
	W0917 00:16:48.289881  838391 start.go:714] error starting host: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:16:48.289975  838391 out.go:285] ! StartHost failed, but will try again: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:16:48.289987  838391 start.go:729] Will try again in 5 seconds ...
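The block above shows two layers of retrying: an inner backoff (the retry.go "will retry after 243.120334ms / 637.92377ms / ..." lines) wrapped around each failing `docker container inspect` call, and an outer "Will try again in 5 seconds" wrapped around the whole fixHost attempt. A minimal Go sketch of that inner pattern follows; retryWithBackoff is a hypothetical helper written for illustration and is not minikube's actual retry.go code.

// retry_sketch.go: illustrative backoff loop in the spirit of the
// "will retry after ..." lines above. Assumption-labelled sketch only.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff (hypothetical) re-runs fn until it succeeds, the attempt
// budget is exhausted, or the deadline passes, sleeping a randomized,
// growing interval between attempts.
func retryWithBackoff(fn func() error, attempts int, maxWait time.Duration) error {
	deadline := time.Now().Add(maxWait)
	wait := 200 * time.Millisecond
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			break
		}
		sleep := wait + time.Duration(rand.Int63n(int64(wait))) // add jitter
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		wait *= 2
	}
	return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
}

func main() {
	err := retryWithBackoff(func() error {
		return errors.New("unable to inspect a not running container to get SSH port")
	}, 5, 3*time.Second)
	fmt.Println(err)
}
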
	I0917 00:16:53.290141  838391 start.go:360] acquireMachinesLock for ha-472903-m04: {Name:mkdbbd0d5b3cd7ad4b13d37f2d562d6d6421c5de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:16:53.290272  838391 start.go:364] duration metric: took 94.983µs to acquireMachinesLock for "ha-472903-m04"
	I0917 00:16:53.290297  838391 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:16:53.290303  838391 fix.go:54] fixHost starting: m04
	I0917 00:16:53.290646  838391 cli_runner.go:164] Run: docker container inspect ha-472903-m04 --format={{.State.Status}}
	I0917 00:16:53.309611  838391 fix.go:112] recreateIfNeeded on ha-472903-m04: state=Stopped err=<nil>
	W0917 00:16:53.309640  838391 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:16:53.311233  838391 out.go:252] * Restarting existing docker container for "ha-472903-m04" ...
	I0917 00:16:53.311300  838391 cli_runner.go:164] Run: docker start ha-472903-m04
	I0917 00:16:53.541222  838391 cli_runner.go:164] Run: docker container inspect ha-472903-m04 --format={{.State.Status}}
	I0917 00:16:53.560095  838391 kic.go:430] container "ha-472903-m04" state is running.
	I0917 00:16:53.560573  838391 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m04
	I0917 00:16:53.580208  838391 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0917 00:16:53.580538  838391 machine.go:93] provisionDockerMachine start ...
	I0917 00:16:53.580642  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	I0917 00:16:53.599573  838391 main.go:141] libmachine: Using SSH client type: native
	I0917 00:16:53.599853  838391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33594 <nil> <nil>}
	I0917 00:16:53.599867  838391 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:16:53.600481  838391 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36824->127.0.0.1:33594: read: connection reset by peer
	I0917 00:16:56.602700  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:16:59.603638  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:02.605644  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:05.607721  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:08.608037  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:11.609632  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:14.610658  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:17.612855  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:20.613697  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:23.614397  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:26.616706  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:29.617175  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:32.618651  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:35.620635  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:38.621502  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:41.622948  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:44.624290  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:47.624933  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:50.625690  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:53.626092  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:56.628195  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:17:59.629019  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:02.631303  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:05.632822  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:08.633316  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:11.635679  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:14.636798  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:17.638657  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:20.639654  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:23.640721  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:26.642651  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:29.643601  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:32.645639  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:35.647624  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:38.648379  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:41.650676  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:44.651634  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:47.653582  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:50.654648  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:53.655970  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:56.658210  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:18:59.658941  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:19:02.661113  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:19:05.663405  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:19:08.664478  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:19:11.666153  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:19:14.667567  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:19:17.668447  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:19:20.668923  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:19:23.669615  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:19:26.671877  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:19:29.673145  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:19:32.674637  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:19:35.677064  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:19:38.678152  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:19:41.680118  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:19:44.681450  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:19:47.682442  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:19:50.682884  838391 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33594: connect: connection refused
	I0917 00:19:53.683789  838391 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:19:53.683836  838391 ubuntu.go:182] provisioning hostname "ha-472903-m04"
	I0917 00:19:53.683924  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:19:53.702821  838391 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	I0917 00:19:53.702901  838391 machine.go:96] duration metric: took 3m0.122343923s to provisionDockerMachine
	I0917 00:19:53.702985  838391 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:19:53.703018  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:19:53.720196  838391 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	I0917 00:19:53.720349  838391 retry.go:31] will retry after 273.264226ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:19:53.994608  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:19:54.012758  838391 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	I0917 00:19:54.012877  838391 retry.go:31] will retry after 451.557634ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:19:54.465611  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:19:54.483957  838391 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	I0917 00:19:54.484069  838391 retry.go:31] will retry after 372.513327ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:19:54.857680  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:19:54.875097  838391 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	W0917 00:19:54.875215  838391 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:19:54.875229  838391 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:19:54.875274  838391 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:19:54.875305  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:19:54.892677  838391 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	I0917 00:19:54.892775  838391 retry.go:31] will retry after 244.26035ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:19:55.137223  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:19:55.156010  838391 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	I0917 00:19:55.156141  838391 retry.go:31] will retry after 195.694179ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:19:55.352609  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:19:55.370515  838391 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	I0917 00:19:55.370623  838391 retry.go:31] will retry after 349.362306ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:19:55.720142  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:19:55.737839  838391 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	I0917 00:19:55.737968  838391 retry.go:31] will retry after 818.87418ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:19:56.557986  838391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	W0917 00:19:56.575881  838391 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04 returned with exit code 1
	W0917 00:19:56.576024  838391 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:19:56.576041  838391 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:19:56.576050  838391 fix.go:56] duration metric: took 3m3.285747581s for fixHost
	I0917 00:19:56.576057  838391 start.go:83] releasing machines lock for "ha-472903-m04", held for 3m3.285773333s
	W0917 00:19:56.576146  838391 out.go:285] * Failed to start docker container. Running "minikube delete -p ha-472903" may fix it: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:19:56.578148  838391 out.go:203] 
	W0917 00:19:56.579015  838391 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: Failed to start host: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:19:56.579029  838391 out.go:285] * 
	W0917 00:19:56.580824  838391 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 00:19:56.581780  838391 out.go:203] 
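
The GUEST_START failure above traces back to the host-port lookup: provisioning asks Docker which host port is mapped to the container's 22/tcp, and because ha-472903-m04 never reaches the running state the Ports map is empty, the Go-template lookup exits with code 1 ("unable to inspect a not running container to get SSH port"), and the SSH dials to the previously known 127.0.0.1:33589 / 127.0.0.1:33594 keep being refused. A minimal sketch of reproducing that lookup and dial check is below; the template string and container name are taken from the log, everything else (function names, timeouts) is an illustrative assumption, and it assumes the docker CLI is on PATH.

// portcheck_sketch.go: look up the host port Docker mapped to 22/tcp for a
// container and try to dial it, mirroring the inspect + "Error dialing TCP"
// sequence in the log. Illustrative only.
package main

import (
	"fmt"
	"net"
	"os/exec"
	"strings"
	"time"
)

// sshHostPort (hypothetical helper) runs the same Go template the log shows
// minikube using for the 22/tcp mapping.
func sshHostPort(container string) (string, error) {
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		// On a stopped container the Ports map is empty and the template
		// lookup fails, which is the "returned with exit code 1" case above.
		return "", fmt.Errorf("get ssh host-port: %w", err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("ha-472903-m04") // container name from this report
	if err != nil {
		fmt.Println(err)
		return
	}
	conn, err := net.DialTimeout("tcp", net.JoinHostPort("127.0.0.1", port), 3*time.Second)
	if err != nil {
		fmt.Println("Error dialing TCP:", err) // e.g. connect: connection refused
		return
	}
	conn.Close()
	fmt.Println("sshd reachable on port", port)
}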
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c8a737e1be33c       6e38f40d628db       5 minutes ago       Running             storage-provisioner       4                   fe7a407d2eb97       storage-provisioner
	2a56abb41f49d       409467f978b4a       6 minutes ago       Running             kindnet-cni               1                   2c028f64de7ca       kindnet-lh7dv
	b4ccada04ba90       8c811b4aec35f       6 minutes ago       Running             busybox                   1                   8196f32c07b91       busybox-7b57f96db7-6hrm6
	aeea8f1127caf       52546a367cc9e       6 minutes ago       Running             coredns                   1                   91d98fd766ced       coredns-66bc5c9577-qn8m7
	9fc46931c7aae       52546a367cc9e       6 minutes ago       Running             coredns                   1                   5e2ab87af7d54       coredns-66bc5c9577-c94hz
	360a9ae449a3a       6e38f40d628db       6 minutes ago       Exited              storage-provisioner       3                   fe7a407d2eb97       storage-provisioner
	b1c8344888d7d       df0860106674d       6 minutes ago       Running             kube-proxy                1                   b64b7dfe57cfc       kube-proxy-d4m8f
	6ce9c5e712887       765655ea60781       6 minutes ago       Running             kube-vip                  0                   1bc9d50f267a3       kube-vip-ha-472903
	9685cc588651c       46169d968e920       6 minutes ago       Running             kube-scheduler            1                   50f4cca94a4f8       kube-scheduler-ha-472903
	c3f8ee22fca28       a0af72f2ec6d6       6 minutes ago       Running             kube-controller-manager   1                   811d527e0af1e       kube-controller-manager-ha-472903
	96d46a46d9093       90550c43ad2bc       6 minutes ago       Running             kube-apiserver            1                   9fcac3d988698       kube-apiserver-ha-472903
	90b187ed887fa       5f1f5298c888d       6 minutes ago       Running             etcd                      1                   070db27b7a5dd       etcd-ha-472903
	0a41d8b587e02       8c811b4aec35f       21 minutes ago      Exited              busybox                   0                   a2422ee3e6e6d       busybox-7b57f96db7-6hrm6
	9f103b05d2d6f       52546a367cc9e       22 minutes ago      Exited              coredns                   0                   9579263342827       coredns-66bc5c9577-c94hz
	3b457407f10e3       52546a367cc9e       22 minutes ago      Exited              coredns                   0                   290cfb537788e       coredns-66bc5c9577-qn8m7
	cc69d2451cb65       409467f978b4a       23 minutes ago      Exited              kindnet-cni               0                   3e17d6ae9b2a6       kindnet-lh7dv
	92dd4d116eb03       df0860106674d       23 minutes ago      Exited              kube-proxy                0                   8c0ecd5301326       kube-proxy-d4m8f
	bba28cace6502       46169d968e920       23 minutes ago      Exited              kube-scheduler            0                   f18dd7697c60f       kube-scheduler-ha-472903
	087290a41f59c       a0af72f2ec6d6       23 minutes ago      Exited              kube-controller-manager   0                   0760ebe1d2a56       kube-controller-manager-ha-472903
	0aba62132d764       90550c43ad2bc       23 minutes ago      Exited              kube-apiserver            0                   8ad1fa8bc0267       kube-apiserver-ha-472903
	23c0af0bdbe95       5f1f5298c888d       23 minutes ago      Exited              etcd                      0                   b01a62742caec       etcd-ha-472903
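
The listing above has the column layout of CRI-level tooling on a containerd node (the kind of output `crictl ps -a` produces): one Exited generation of control-plane containers from before the restart and one Running generation from after it. A small sketch of reproducing such a listing on the primary node follows, assuming the minikube binary and the profile name used in this report.

// listcontainers_sketch.go: reproduce a container listing like the one above
// from the primary node. Sketch only; "crictl ps -a" is the CRI-level
// equivalent of "docker ps -a" on a containerd runtime.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("minikube", "-p", "ha-472903", "ssh", "--",
		"sudo", "crictl", "ps", "-a").CombinedOutput()
	if err != nil {
		fmt.Println("crictl listing failed:", err)
	}
	fmt.Print(string(out))
}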
	
	
	==> containerd <==
	Sep 17 00:14:06 ha-472903 containerd[478]: time="2025-09-17T00:14:06.742622145Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 17 00:14:07 ha-472903 containerd[478]: time="2025-09-17T00:14:07.230475449Z" level=info msg="RemoveContainer for \"5a5a17cca6c0a643b6c0881dab5508dcb7de8e6ad77d7e6ecb81d434ab2cc8a1\""
	Sep 17 00:14:07 ha-472903 containerd[478]: time="2025-09-17T00:14:07.235120578Z" level=info msg="RemoveContainer for \"5a5a17cca6c0a643b6c0881dab5508dcb7de8e6ad77d7e6ecb81d434ab2cc8a1\" returns successfully"
	Sep 17 00:14:20 ha-472903 containerd[478]: time="2025-09-17T00:14:20.057131193Z" level=info msg="CreateContainer within sandbox \"fe7a407d2eb97d648dbca1e85a5587efe15f437488c6dd3ef99c90d4b44796b2\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:4,}"
	Sep 17 00:14:20 ha-472903 containerd[478]: time="2025-09-17T00:14:20.067657299Z" level=info msg="CreateContainer within sandbox \"fe7a407d2eb97d648dbca1e85a5587efe15f437488c6dd3ef99c90d4b44796b2\" for &ContainerMetadata{Name:storage-provisioner,Attempt:4,} returns container id \"c8a737e1be33c6e4b6e17f5359483d22d3eeb7ca2497546109c1097eb9343a7f\""
	Sep 17 00:14:20 ha-472903 containerd[478]: time="2025-09-17T00:14:20.068219427Z" level=info msg="StartContainer for \"c8a737e1be33c6e4b6e17f5359483d22d3eeb7ca2497546109c1097eb9343a7f\""
	Sep 17 00:14:20 ha-472903 containerd[478]: time="2025-09-17T00:14:20.127854739Z" level=info msg="StartContainer for \"c8a737e1be33c6e4b6e17f5359483d22d3eeb7ca2497546109c1097eb9343a7f\" returns successfully"
	Sep 17 00:14:29 ha-472903 containerd[478]: time="2025-09-17T00:14:29.048175952Z" level=info msg="RemoveContainer for \"8683544e2a9d579448e28b8f33653e2c8d1315b2d07bd7b4ce574428d93c6f3a\""
	Sep 17 00:14:29 ha-472903 containerd[478]: time="2025-09-17T00:14:29.051943763Z" level=info msg="RemoveContainer for \"8683544e2a9d579448e28b8f33653e2c8d1315b2d07bd7b4ce574428d93c6f3a\" returns successfully"
	Sep 17 00:14:29 ha-472903 containerd[478]: time="2025-09-17T00:14:29.053740259Z" level=info msg="StopPodSandbox for \"4c425da29992d5b1e2fc336043685d4cf113ef650aab347de0664ccbbee0de50\""
	Sep 17 00:14:29 ha-472903 containerd[478]: time="2025-09-17T00:14:29.053865890Z" level=info msg="TearDown network for sandbox \"4c425da29992d5b1e2fc336043685d4cf113ef650aab347de0664ccbbee0de50\" successfully"
	Sep 17 00:14:29 ha-472903 containerd[478]: time="2025-09-17T00:14:29.053890854Z" level=info msg="StopPodSandbox for \"4c425da29992d5b1e2fc336043685d4cf113ef650aab347de0664ccbbee0de50\" returns successfully"
	Sep 17 00:14:29 ha-472903 containerd[478]: time="2025-09-17T00:14:29.054466776Z" level=info msg="RemovePodSandbox for \"4c425da29992d5b1e2fc336043685d4cf113ef650aab347de0664ccbbee0de50\""
	Sep 17 00:14:29 ha-472903 containerd[478]: time="2025-09-17T00:14:29.054510253Z" level=info msg="Forcibly stopping sandbox \"4c425da29992d5b1e2fc336043685d4cf113ef650aab347de0664ccbbee0de50\""
	Sep 17 00:14:29 ha-472903 containerd[478]: time="2025-09-17T00:14:29.054597568Z" level=info msg="TearDown network for sandbox \"4c425da29992d5b1e2fc336043685d4cf113ef650aab347de0664ccbbee0de50\" successfully"
	Sep 17 00:14:29 ha-472903 containerd[478]: time="2025-09-17T00:14:29.058233686Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4c425da29992d5b1e2fc336043685d4cf113ef650aab347de0664ccbbee0de50\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Sep 17 00:14:29 ha-472903 containerd[478]: time="2025-09-17T00:14:29.058306846Z" level=info msg="RemovePodSandbox \"4c425da29992d5b1e2fc336043685d4cf113ef650aab347de0664ccbbee0de50\" returns successfully"
	Sep 17 00:14:29 ha-472903 containerd[478]: time="2025-09-17T00:14:29.058804033Z" level=info msg="StopPodSandbox for \"1c0713f862ea047ef39e7ae39aea7b7769255565bbf61da2859ac341b5b32bca\""
	Sep 17 00:14:29 ha-472903 containerd[478]: time="2025-09-17T00:14:29.058920078Z" level=info msg="TearDown network for sandbox \"1c0713f862ea047ef39e7ae39aea7b7769255565bbf61da2859ac341b5b32bca\" successfully"
	Sep 17 00:14:29 ha-472903 containerd[478]: time="2025-09-17T00:14:29.058947458Z" level=info msg="StopPodSandbox for \"1c0713f862ea047ef39e7ae39aea7b7769255565bbf61da2859ac341b5b32bca\" returns successfully"
	Sep 17 00:14:29 ha-472903 containerd[478]: time="2025-09-17T00:14:29.059233694Z" level=info msg="RemovePodSandbox for \"1c0713f862ea047ef39e7ae39aea7b7769255565bbf61da2859ac341b5b32bca\""
	Sep 17 00:14:29 ha-472903 containerd[478]: time="2025-09-17T00:14:29.059274964Z" level=info msg="Forcibly stopping sandbox \"1c0713f862ea047ef39e7ae39aea7b7769255565bbf61da2859ac341b5b32bca\""
	Sep 17 00:14:29 ha-472903 containerd[478]: time="2025-09-17T00:14:29.059351772Z" level=info msg="TearDown network for sandbox \"1c0713f862ea047ef39e7ae39aea7b7769255565bbf61da2859ac341b5b32bca\" successfully"
	Sep 17 00:14:29 ha-472903 containerd[478]: time="2025-09-17T00:14:29.062137499Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1c0713f862ea047ef39e7ae39aea7b7769255565bbf61da2859ac341b5b32bca\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Sep 17 00:14:29 ha-472903 containerd[478]: time="2025-09-17T00:14:29.062200412Z" level=info msg="RemovePodSandbox \"1c0713f862ea047ef39e7ae39aea7b7769255565bbf61da2859ac341b5b32bca\" returns successfully"
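
The containerd entries above record the kubelet-driven sandbox cleanup flow: StopPodSandbox, then TearDown network, then RemovePodSandbox, with a "Forcibly stopping sandbox" pass and a benign "not found" warning when the sandbox is already gone. The same stop/remove sequence can be driven by hand via crictl, as in the sketch below; the sandbox ID is the one from the log, and the commands assume crictl on the node is configured for containerd.

// sandboxcleanup_sketch.go: replay the stop/remove pod-sandbox sequence the
// containerd log records, via crictl (stopp = stop pod sandbox, rmp = remove
// pod sandbox). Illustrative sketch only.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	sandbox := "4c425da29992d5b1e2fc336043685d4cf113ef650aab347de0664ccbbee0de50"
	for _, args := range [][]string{
		{"crictl", "stopp", sandbox}, // StopPodSandbox
		{"crictl", "rmp", sandbox},   // RemovePodSandbox
	} {
		out, err := exec.Command("sudo", args...).CombinedOutput()
		fmt.Printf("%v: %s", args, out)
		if err != nil {
			fmt.Println("  error:", err)
		}
	}
}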
	
	
	==> coredns [3b457407f10e357ce33da7fa3fb4333f8312f0d3e3570cf8528cdcac8f5a1d0f] <==
	[INFO] 10.244.1.2:53799 - 6 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,rd,ra 126 0.009949044s
	[INFO] 10.244.0.4:39485 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157098s
	[INFO] 10.244.0.4:57871 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 89 0.000750185s
	[INFO] 10.244.0.4:53410 - 5 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,aa,rd,ra 126 0.000089028s
	[INFO] 10.244.1.2:37733 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150317s
	[INFO] 10.244.1.2:59346 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.028128363s
	[INFO] 10.244.1.2:43091 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.01004668s
	[INFO] 10.244.1.2:37227 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000191819s
	[INFO] 10.244.1.2:40079 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000125376s
	[INFO] 10.244.0.4:38168 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000181114s
	[INFO] 10.244.0.4:60067 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000087147s
	[INFO] 10.244.0.4:47611 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000122939s
	[INFO] 10.244.0.4:37626 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000121195s
	[INFO] 10.244.1.2:42817 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159509s
	[INFO] 10.244.1.2:33910 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000186538s
	[INFO] 10.244.1.2:37929 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000109836s
	[INFO] 10.244.0.4:50698 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000212263s
	[INFO] 10.244.0.4:33166 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000100167s
	[INFO] 10.244.1.2:50377 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157558s
	[INFO] 10.244.1.2:39491 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000132025s
	[INFO] 10.244.1.2:50075 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000112028s
	[INFO] 10.244.0.4:58743 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000149175s
	[INFO] 10.244.0.4:52796 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000114946s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [9f103b05d2d6fd9df1ffca0135173363251e58587aa3f9093200d96a7302d315] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45239 - 14115 "HINFO IN 5883645869461503498.3950535614037284853. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.058516241s
	[INFO] 10.244.1.2:55352 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 89 0.003252862s
	[INFO] 10.244.0.4:33650 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.001640931s
	[INFO] 10.244.0.4:50077 - 6 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,rd,ra 124 0.000621363s
	[INFO] 10.244.1.2:48439 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000189187s
	[INFO] 10.244.1.2:39582 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000151327s
	[INFO] 10.244.1.2:59539 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000140715s
	[INFO] 10.244.0.4:42999 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000177514s
	[INFO] 10.244.0.4:36769 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.010694753s
	[INFO] 10.244.0.4:53074 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000158932s
	[INFO] 10.244.0.4:57223 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00012213s
	[INFO] 10.244.1.2:50810 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000176678s
	[INFO] 10.244.0.4:58045 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142445s
	[INFO] 10.244.0.4:39777 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000123555s
	[INFO] 10.244.1.2:59022 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000148853s
	[INFO] 10.244.0.4:45136 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001657s
	[INFO] 10.244.0.4:37711 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000134332s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [9fc46931c7aae5fea2058b723439b03184beee352ff9a7efcf262818181a635d] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60683 - 9436 "HINFO IN 7751308179169184926.6829077423459472962. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.019258685s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> coredns [aeea8f1127caf7117ade119a9e492104789925a531209d0aba3022cd18cb7ce1] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40200 - 1569 "HINFO IN 6158707635578374570.8737516254824064952. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.057247461s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
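
The repeated "dial tcp 10.96.0.1:443: i/o timeout" entries in both restarted CoreDNS instances mean the kubernetes plugin could not list Services, Namespaces, and EndpointSlices from the in-cluster API service while the control plane was still coming back; once the apiserver became reachable, the "waiting for Kubernetes API" messages stopped. The client-go sketch below performs the same kind of Service list from inside a pod; it is a minimal sketch assuming in-cluster credentials, not CoreDNS's source.

// apicheck_sketch.go: issue, from inside a pod, the same kind of Service list
// that CoreDNS's kubernetes plugin sends to the cluster API (10.96.0.1:443 in
// this cluster). Minimal illustrative sketch.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig() // uses the in-cluster service host, 10.96.0.1 here
	if err != nil {
		fmt.Println("not running in-cluster:", err)
		return
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Println(err)
		return
	}
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	svcs, err := clientset.CoreV1().Services("").List(ctx, metav1.ListOptions{Limit: 500})
	if err != nil {
		fmt.Println("list services failed (apiserver unreachable?):", err)
		return
	}
	fmt.Println("services visible:", len(svcs.Items))
}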
	
	
	==> describe nodes <==
	Name:               ha-472903
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-472903
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=ha-472903
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_16T23_56_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Sep 2025 23:56:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-472903
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:20:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:17:29 +0000   Tue, 16 Sep 2025 23:56:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:17:29 +0000   Tue, 16 Sep 2025 23:56:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:17:29 +0000   Tue, 16 Sep 2025 23:56:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:17:29 +0000   Tue, 16 Sep 2025 23:56:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-472903
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 e92083047f3148b2867b7885ff1f4fb4
	  System UUID:                695af4c7-28fb-4299-9454-75db3262ca2c
	  Boot ID:                    4acfd7d3-9698-436f-b4ae-efdf6bd483d5
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-6hrm6             0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 coredns-66bc5c9577-c94hz             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     23m
	  kube-system                 coredns-66bc5c9577-qn8m7             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     23m
	  kube-system                 etcd-ha-472903                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         23m
	  kube-system                 kindnet-lh7dv                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      23m
	  kube-system                 kube-apiserver-ha-472903             250m (3%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-controller-manager-ha-472903    200m (2%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-proxy-d4m8f                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-scheduler-ha-472903             100m (1%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-vip-ha-472903                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m33s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m32s                  kube-proxy       
	  Normal  Starting                 23m                    kube-proxy       
	  Normal  NodeHasSufficientMemory  23m (x8 over 23m)      kubelet          Node ha-472903 status is now: NodeHasSufficientMemory
	  Normal  Starting                 23m                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  23m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     23m (x7 over 23m)      kubelet          Node ha-472903 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    23m (x8 over 23m)      kubelet          Node ha-472903 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 23m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     23m                    kubelet          Node ha-472903 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  23m                    kubelet          Node ha-472903 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23m                    kubelet          Node ha-472903 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           23m                    node-controller  Node ha-472903 event: Registered Node ha-472903 in Controller
	  Normal  RegisteredNode           22m                    node-controller  Node ha-472903 event: Registered Node ha-472903 in Controller
	  Normal  RegisteredNode           22m                    node-controller  Node ha-472903 event: Registered Node ha-472903 in Controller
	  Normal  RegisteredNode           8m13s                  node-controller  Node ha-472903 event: Registered Node ha-472903 in Controller
	  Normal  Starting                 6m40s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m40s (x8 over 6m40s)  kubelet          Node ha-472903 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m40s (x8 over 6m40s)  kubelet          Node ha-472903 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m40s (x7 over 6m40s)  kubelet          Node ha-472903 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m40s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m31s                  node-controller  Node ha-472903 event: Registered Node ha-472903 in Controller
	  Normal  RegisteredNode           6m31s                  node-controller  Node ha-472903 event: Registered Node ha-472903 in Controller
	  Normal  RegisteredNode           6m26s                  node-controller  Node ha-472903 event: Registered Node ha-472903 in Controller
	
	
	Name:               ha-472903-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-472903-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=ha-472903
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_16T23_57_23_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Sep 2025 23:57:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-472903-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:20:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:18:52 +0000   Tue, 16 Sep 2025 23:57:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:18:52 +0000   Tue, 16 Sep 2025 23:57:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:18:52 +0000   Tue, 16 Sep 2025 23:57:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:18:52 +0000   Tue, 16 Sep 2025 23:57:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-472903-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 0e1a1fb76ba244e2b9677af4de050ca0
	  System UUID:                85df9db8-f21a-4038-9f8c-4cc1d81dc0d5
	  Boot ID:                    4acfd7d3-9698-436f-b4ae-efdf6bd483d5
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-4jfjt                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 etcd-ha-472903-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         22m
	  kube-system                 kindnet-q7c7s                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      22m
	  kube-system                 kube-apiserver-ha-472903-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-ha-472903-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-58lkb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-ha-472903-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-vip-ha-472903-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 22m                    kube-proxy       
	  Normal  RegisteredNode           22m                    node-controller  Node ha-472903-m02 event: Registered Node ha-472903-m02 in Controller
	  Normal  RegisteredNode           22m                    node-controller  Node ha-472903-m02 event: Registered Node ha-472903-m02 in Controller
	  Normal  RegisteredNode           22m                    node-controller  Node ha-472903-m02 event: Registered Node ha-472903-m02 in Controller
	  Normal  NodeAllocatableEnforced  8m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     8m19s (x7 over 8m19s)  kubelet          Node ha-472903-m02 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    8m19s (x8 over 8m19s)  kubelet          Node ha-472903-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  8m19s (x8 over 8m19s)  kubelet          Node ha-472903-m02 status is now: NodeHasSufficientMemory
	  Normal  Starting                 8m19s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m13s                  node-controller  Node ha-472903-m02 event: Registered Node ha-472903-m02 in Controller
	  Normal  Starting                 6m38s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m38s (x8 over 6m38s)  kubelet          Node ha-472903-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m38s (x8 over 6m38s)  kubelet          Node ha-472903-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m38s (x7 over 6m38s)  kubelet          Node ha-472903-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m38s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m31s                  node-controller  Node ha-472903-m02 event: Registered Node ha-472903-m02 in Controller
	  Normal  RegisteredNode           6m31s                  node-controller  Node ha-472903-m02 event: Registered Node ha-472903-m02 in Controller
	  Normal  RegisteredNode           6m26s                  node-controller  Node ha-472903-m02 event: Registered Node ha-472903-m02 in Controller
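
	(Editor's note: the two node descriptions above are the API server's view of the surviving control planes. A minimal way to regenerate this view while triaging, assuming the ha-472903 profile is still running — a command sketch, not part of the captured test run:
	  out/minikube-linux-amd64 -p ha-472903 kubectl -- describe node ha-472903 ha-472903-m02 )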
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 86 e8 75 4b 01 57 08 06
	[  +0.025562] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff be 3d a9 85 b1 bd 08 06
	[ +13.150028] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff d6 5c f0 26 cd ba 08 06
	[  +0.000341] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a 20 90 fb f5 d8 08 06
	[ +28.639349] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 06 26 63 8d db 90 08 06
	[  +0.000417] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff be 3d a9 85 b1 bd 08 06
	[  +0.836892] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 56 cc 9b 52 38 94 08 06
	[  +0.080327] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3e 79 8e c8 7e 37 08 06
	[Sep16 23:40] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 66 3b 76 df aa 6a 08 06
	[ +20.325550] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff de 39 4b 41 df 63 08 06
	[  +0.000318] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 66 3b 76 df aa 6a 08 06
	[  +8.925776] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3e cd c1 f7 dc c8 08 06
	[  +0.000373] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 3e 79 8e c8 7e 37 08 06
	
	
	==> etcd [23c0af0bdbe9526d53769461ed9f80d8c743b02e625b65cce39c888f5e7d4b4e] <==
	{"level":"warn","ts":"2025-09-17T00:13:08.865242Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128040017673679127,"retry-timeout":"500ms"}
	{"level":"info","ts":"2025-09-17T00:13:09.078092Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"aec36adc501070cc is starting a new election at term 2"}
	{"level":"info","ts":"2025-09-17T00:13:09.078216Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2025-09-17T00:13:09.078269Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 2, index: 4061] sent MsgPreVote request to 3aa85cdcd5e5557b at term 2"}
	{"level":"info","ts":"2025-09-17T00:13:09.078323Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 2, index: 4061] sent MsgPreVote request to ab9d0391dce79465 at term 2"}
	{"level":"info","ts":"2025-09-17T00:13:09.078391Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2025-09-17T00:13:09.078467Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"warn","ts":"2025-09-17T00:13:09.366348Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128040017673679127,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-09-17T00:13:09.733983Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"2.00012067s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"","error":"context deadline exceeded"}
	{"level":"info","ts":"2025-09-17T00:13:09.734106Z","caller":"traceutil/trace.go:172","msg":"trace[1703373101] range","detail":"{range_begin:/registry/health; range_end:; }","duration":"2.000255365s","start":"2025-09-17T00:13:07.733837Z","end":"2025-09-17T00:13:09.734092Z","steps":["trace[1703373101] 'agreement among raft nodes before linearized reading'  (duration: 2.000119103s)"],"step_count":1}
	{"level":"warn","ts":"2025-09-17T00:13:09.734220Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-17T00:13:07.733823Z","time spent":"2.000381887s","remote":"127.0.0.1:56470","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":0,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2025-09-17T00:13:09.824490Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"10.001550907s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"","error":"context canceled"}
	{"level":"info","ts":"2025-09-17T00:13:09.830580Z","caller":"traceutil/trace.go:172","msg":"trace[2000130708] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; }","duration":"10.007653802s","start":"2025-09-17T00:12:59.822907Z","end":"2025-09-17T00:13:09.830561Z","steps":["trace[2000130708] 'agreement among raft nodes before linearized reading'  (duration: 10.001549225s)"],"step_count":1}
	{"level":"warn","ts":"2025-09-17T00:13:09.830689Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-17T00:12:59.822890Z","time spent":"10.007768318s","remote":"127.0.0.1:56876","response type":"/etcdserverpb.KV/Range","request count":0,"request size":69,"response count":0,"response size":0,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 "}
	2025/09/17 00:13:09 WARNING: [core] [Server #5]grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2025-09-17T00:13:09.866876Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128040017673679127,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-09-17T00:13:10.366968Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128040017673679127,"retry-timeout":"500ms"}
	{"level":"info","ts":"2025-09-17T00:13:10.478109Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"aec36adc501070cc is starting a new election at term 2"}
	{"level":"info","ts":"2025-09-17T00:13:10.478171Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2025-09-17T00:13:10.478195Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 2, index: 4061] sent MsgPreVote request to 3aa85cdcd5e5557b at term 2"}
	{"level":"info","ts":"2025-09-17T00:13:10.478218Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 2, index: 4061] sent MsgPreVote request to ab9d0391dce79465 at term 2"}
	{"level":"info","ts":"2025-09-17T00:13:10.478252Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2025-09-17T00:13:10.478278Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"warn","ts":"2025-09-17T00:13:10.720561Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-17T00:13:03.715662Z","time spent":"7.004893477s","remote":"127.0.0.1:56646","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"warn","ts":"2025-09-17T00:13:10.867073Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128040017673679127,"retry-timeout":"500ms"}
	
	
	==> etcd [90b187ed887fae063d0e3d6e7f9316abbc50f1e7b9c092596b43a1c43c86e79d] <==
	{"level":"info","ts":"2025-09-17T00:13:39.653688Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"ab9d0391dce79465"}
	{"level":"info","ts":"2025-09-17T00:13:39.662722Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"ab9d0391dce79465"}
	{"level":"info","ts":"2025-09-17T00:13:39.663230Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"ab9d0391dce79465"}
	{"level":"warn","ts":"2025-09-17T00:13:39.862686Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"ab9d0391dce79465","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-09-17T00:13:39.862713Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"ab9d0391dce79465","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-09-17T00:20:00.817249Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:43226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:20:00.833811Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:43246","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-17T00:20:00.852159Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aec36adc501070cc switched to configuration voters=(4226730353838347643 12593026477526642892)"}
	{"level":"info","ts":"2025-09-17T00:20:00.853090Z","caller":"membership/cluster.go:460","msg":"removed member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"ab9d0391dce79465","removed-remote-peer-urls":["https://192.168.49.4:2380"],"removed-remote-peer-is-learner":false}
	{"level":"info","ts":"2025-09-17T00:20:00.853131Z","caller":"rafthttp/peer.go:316","msg":"stopping remote peer","remote-peer-id":"ab9d0391dce79465"}
	{"level":"warn","ts":"2025-09-17T00:20:00.853242Z","caller":"rafthttp/stream.go:285","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"ab9d0391dce79465"}
	{"level":"info","ts":"2025-09-17T00:20:00.853277Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"ab9d0391dce79465"}
	{"level":"warn","ts":"2025-09-17T00:20:00.853321Z","caller":"rafthttp/stream.go:285","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"ab9d0391dce79465"}
	{"level":"info","ts":"2025-09-17T00:20:00.853330Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"ab9d0391dce79465"}
	{"level":"info","ts":"2025-09-17T00:20:00.853433Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"ab9d0391dce79465"}
	{"level":"warn","ts":"2025-09-17T00:20:00.853567Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"ab9d0391dce79465","error":"context canceled"}
	{"level":"warn","ts":"2025-09-17T00:20:00.853601Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"ab9d0391dce79465","error":"failed to read ab9d0391dce79465 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2025-09-17T00:20:00.853755Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"ab9d0391dce79465"}
	{"level":"warn","ts":"2025-09-17T00:20:00.853930Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"ab9d0391dce79465","error":"context canceled"}
	{"level":"info","ts":"2025-09-17T00:20:00.853965Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"ab9d0391dce79465"}
	{"level":"info","ts":"2025-09-17T00:20:00.853994Z","caller":"rafthttp/peer.go:321","msg":"stopped remote peer","remote-peer-id":"ab9d0391dce79465"}
	{"level":"info","ts":"2025-09-17T00:20:00.854008Z","caller":"rafthttp/transport.go:354","msg":"removed remote peer","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"ab9d0391dce79465"}
	{"level":"info","ts":"2025-09-17T00:20:00.854032Z","caller":"etcdserver/server.go:1752","msg":"applied a configuration change through raft","local-member-id":"aec36adc501070cc","raft-conf-change":"ConfChangeRemoveNode","raft-conf-change-node-id":"ab9d0391dce79465"}
	{"level":"warn","ts":"2025-09-17T00:20:00.860309Z","caller":"rafthttp/http.go:396","msg":"rejected stream from remote peer because it was removed","local-member-id":"aec36adc501070cc","remote-peer-id-stream-handler":"aec36adc501070cc","remote-peer-id-from":"ab9d0391dce79465"}
	{"level":"warn","ts":"2025-09-17T00:20:00.861564Z","caller":"embed/config_logging.go:188","msg":"rejected connection on peer endpoint","remote-addr":"192.168.49.4:57216","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 00:20:09 up  3:02,  0 users,  load average: 0.61, 0.77, 0.87
	Linux ha-472903 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [2a56abb41f49d6755de68bb41070eee7c07fee5950b2584042a3850228b3c274] <==
	I0917 00:19:27.392548       1 main.go:324] Node ha-472903-m02 has CIDR [10.244.1.0/24] 
	I0917 00:19:27.392752       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:19:27.392765       1 main.go:324] Node ha-472903-m03 has CIDR [10.244.2.0/24] 
	I0917 00:19:37.390063       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:19:37.390101       1 main.go:301] handling current node
	I0917 00:19:37.390118       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:19:37.390123       1 main.go:324] Node ha-472903-m02 has CIDR [10.244.1.0/24] 
	I0917 00:19:37.390327       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:19:37.390339       1 main.go:324] Node ha-472903-m03 has CIDR [10.244.2.0/24] 
	I0917 00:19:47.397482       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:19:47.397526       1 main.go:301] handling current node
	I0917 00:19:47.397543       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:19:47.397548       1 main.go:324] Node ha-472903-m02 has CIDR [10.244.1.0/24] 
	I0917 00:19:47.397996       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:19:47.398026       1 main.go:324] Node ha-472903-m03 has CIDR [10.244.2.0/24] 
	I0917 00:19:57.390658       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:19:57.390704       1 main.go:301] handling current node
	I0917 00:19:57.390723       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:19:57.390729       1 main.go:324] Node ha-472903-m02 has CIDR [10.244.1.0/24] 
	I0917 00:19:57.390896       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:19:57.391108       1 main.go:324] Node ha-472903-m03 has CIDR [10.244.2.0/24] 
	I0917 00:20:07.391508       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:20:07.391558       1 main.go:301] handling current node
	I0917 00:20:07.391577       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:20:07.391584       1 main.go:324] Node ha-472903-m02 has CIDR [10.244.1.0/24] 
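
	(Editor's note: in its last pass this kindnet instance only iterates over ha-472903 and ha-472903-m02, consistent with the third member having been removed above. The per-node PodCIDRs it routes for can be listed directly; a sketch:
	  out/minikube-linux-amd64 -p ha-472903 kubectl -- get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}' )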
	
	
	==> kindnet [cc69d2451cb65860b5bc78e027be2fc1cb0f9fa6542b4abe3bc1ff1c90a8fe60] <==
	I0917 00:12:27.503889       1 main.go:324] Node ha-472903-m02 has CIDR [10.244.1.0/24] 
	I0917 00:12:37.507295       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:12:37.507338       1 main.go:301] handling current node
	I0917 00:12:37.507353       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:12:37.507359       1 main.go:324] Node ha-472903-m02 has CIDR [10.244.1.0/24] 
	I0917 00:12:37.507565       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:12:37.507578       1 main.go:324] Node ha-472903-m03 has CIDR [10.244.2.0/24] 
	I0917 00:12:47.503578       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:12:47.503630       1 main.go:324] Node ha-472903-m03 has CIDR [10.244.2.0/24] 
	I0917 00:12:47.503841       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:12:47.503857       1 main.go:301] handling current node
	I0917 00:12:47.503874       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:12:47.503882       1 main.go:324] Node ha-472903-m02 has CIDR [10.244.1.0/24] 
	I0917 00:12:57.503552       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:12:57.503592       1 main.go:301] handling current node
	I0917 00:12:57.503612       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:12:57.503618       1 main.go:324] Node ha-472903-m02 has CIDR [10.244.1.0/24] 
	I0917 00:12:57.504021       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:12:57.504066       1 main.go:324] Node ha-472903-m03 has CIDR [10.244.2.0/24] 
	I0917 00:13:07.510512       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:13:07.510552       1 main.go:324] Node ha-472903-m03 has CIDR [10.244.2.0/24] 
	I0917 00:13:07.511170       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:13:07.511196       1 main.go:301] handling current node
	I0917 00:13:07.511281       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:13:07.511312       1 main.go:324] Node ha-472903-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [0aba62132d764965d8e1a80a4a6345bb7e34892b23143da4a7af3450cd465d6c] <==
	E0917 00:13:11.166753       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:13:11.166775       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:13:11.166780       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:13:11.166731       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:13:11.166754       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:13:11.167368       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:13:11.167554       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:13:11.167606       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:13:11.167640       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:13:11.167659       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:13:11.168321       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:13:11.168332       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:13:11.168355       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:13:11.168358       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:13:11.168761       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:13:11.168807       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:13:11.168826       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:13:11.168844       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:13:11.168845       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:13:11.168866       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:13:11.168873       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:13:11.168898       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:13:11.169017       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:13:11.169052       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:13:11.169077       1 watcher.go:335] watch chan error: etcdserver: no leader
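
	(Editor's note: the watch-channel errors above are the apiserver surfacing the same "etcdserver: no leader" condition seen in the etcd log. When reproducing, the aggregated health checks can be queried directly; a sketch, assuming the standard health endpoints are enabled:
	  out/minikube-linux-amd64 -p ha-472903 kubectl -- get --raw='/readyz?verbose'
	  out/minikube-linux-amd64 -p ha-472903 kubectl -- get --raw='/readyz/etcd' )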
	
	
	==> kube-apiserver [96d46a46d90937e1dc254cbb641e1f12887151faabbe128f2cc51a8a833fe573] <==
	I0917 00:13:35.109530       1 aggregator.go:171] initial CRD sync complete...
	I0917 00:13:35.109558       1 autoregister_controller.go:144] Starting autoregister controller
	I0917 00:13:35.109566       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0917 00:13:35.109573       1 cache.go:39] Caches are synced for autoregister controller
	W0917 00:13:35.114733       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.3 192.168.49.4]
	I0917 00:13:35.116809       1 controller.go:667] quota admission added evaluator for: endpoints
	I0917 00:13:35.117772       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0917 00:13:35.127627       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0917 00:13:35.133999       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0917 00:13:35.156218       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0917 00:13:35.994627       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0917 00:13:36.160405       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	W0917 00:13:36.454299       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.4]
	I0917 00:13:38.437732       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0917 00:13:38.895584       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0917 00:14:14.427245       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0917 00:14:34.638077       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:14:55.389838       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:15:41.589543       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:16:00.249213       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:16:50.539266       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:17:30.019039       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:18:00.900712       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:18:53.314317       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:19:24.721832       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [087290a41f59caa4f9bc89759bcec6cf90f47c8a2ab83b7c671a8fff35641df9] <==
	I0916 23:56:54.728442       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0916 23:56:54.728466       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0916 23:56:54.728485       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0916 23:56:54.728644       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0916 23:56:54.728665       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0916 23:56:54.728648       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0916 23:56:54.728914       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0916 23:56:54.730175       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0916 23:56:54.730201       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0916 23:56:54.732432       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0916 23:56:54.733452       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0916 23:56:54.735655       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0916 23:56:54.735714       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0916 23:56:54.735760       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0916 23:56:54.735767       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0916 23:56:54.735772       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0916 23:56:54.740680       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-472903" podCIDRs=["10.244.0.0/24"]
	I0916 23:56:54.749950       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0916 23:57:22.933124       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-472903-m02\" does not exist"
	I0916 23:57:22.943785       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-472903-m02" podCIDRs=["10.244.1.0/24"]
	I0916 23:57:24.681339       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-472903-m02"
	I0916 23:57:51.749676       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-472903-m03\" does not exist"
	I0916 23:57:51.772476       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-472903-m03" podCIDRs=["10.244.2.0/24"]
	E0916 23:57:51.829801       1 daemon_controller.go:346] "Unhandled Error" err="kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kube-proxy\", GenerateName:\"\", Namespace:\"kube-system\", SelfLink:\"\", UID:\"3f5da9fc-6769-4ca8-a715-edeace44c646\", ResourceVersion:\"594\", Generation:1, CreationTimestamp:time.Date(2025, time.September, 16, 23, 56, 49, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"k8s-app\":\"kube-proxy\"}, Annotations:map[string]string{\"deprecated.daemonset.template.generation\":\"1\"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc00222d0e0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:\"\", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"
\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"k8s-app\":\"kube-proxy\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:\"kube-proxy\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSourc
e)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc0021ed7c0), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"xtables-lock\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc002fcdce0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolu
meSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIV
olumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"lib-modules\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc002fcdcf8), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtu
alDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:\"kube-proxy\", Image:\"registry.k8s.io/kube-proxy:v1.34.0\", Command:[]string{\"/usr/local/bin/kube-proxy\", \"--config=/var/lib/kube-proxy/config.conf\", \"--hostname-override=$(NODE_NAME)\"}, Args:[]string(nil), WorkingDir:\"\", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:\"NODE_NAME\", Value:\"\", ValueFrom:(*v1.EnvVarSource)(0xc00144a7e0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.Re
sourceList(nil), Claims:[]v1.ResourceClaim(nil)}, ResizePolicy:[]v1.ContainerResizePolicy(nil), RestartPolicy:(*v1.ContainerRestartPolicy)(nil), RestartPolicyRules:[]v1.ContainerRestartRule(nil), VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:\"kube-proxy\", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/var/lib/kube-proxy\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"xtables-lock\", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/run/xtables.lock\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"lib-modules\", ReadOnly:true, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/lib/modules\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Life
cycle:(*v1.Lifecycle)(nil), TerminationMessagePath:\"/dev/termination-log\", TerminationMessagePolicy:\"File\", ImagePullPolicy:\"IfNotPresent\", SecurityContext:(*v1.SecurityContext)(0xc0019549c0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:\"Always\", TerminationGracePeriodSeconds:(*int64)(0xc001900b18), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:\"ClusterFirst\", NodeSelector:map[string]string{\"kubernetes.io/os\":\"linux\"}, ServiceAccountName:\"kube-proxy\", DeprecatedServiceAccount:\"kube-proxy\", AutomountServiceAccountToken:(*bool)(nil), NodeName:\"\", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001ba1200), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:\"\", Subdomain:\"\", Affinity:(*v1.Affinity)(nil), SchedulerName:\"default-scheduler\", Tolerations:[]v1.Toleration{v1.Toleration{Key:\"\", Operator:\"Exists\", Value:\"\", Effect:\"\", Tole
rationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:\"system-node-critical\", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil), SchedulingGates:[]v1.PodSchedulingGate(nil), ResourceClaims:[]v1.PodResourceClaim(nil), Resources:(*v1.ResourceRequirements)(nil), HostnameOverride:(*string)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:\"RollingUpdate\", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc001e14570)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001900b70)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:2, NumberMisscheduled:0, DesiredNumberScheduled:2, NumberReady:2, ObservedGeneration:1, UpdatedNumberScheduled:2, NumberAvailab
le:2, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps \"kube-proxy\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0916 23:57:54.685322       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-472903-m03"
	
	
	==> kube-controller-manager [c3f8ee22fca28b303f553c3003d1000b80565b4147ba719401c8c5f61921ee41] <==
	I0917 00:13:38.427005       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0917 00:13:38.427138       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0917 00:13:38.428331       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0917 00:13:38.431473       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0917 00:13:38.431610       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0917 00:13:38.431764       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0917 00:13:38.431826       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0917 00:13:38.431860       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0917 00:13:38.431926       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0917 00:13:38.431992       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0917 00:13:38.432765       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0917 00:13:38.432816       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0917 00:13:38.432831       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0917 00:13:38.432867       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0917 00:13:38.432870       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0917 00:13:38.433430       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0917 00:13:38.433549       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0917 00:13:38.433648       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-472903"
	I0917 00:13:38.433689       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-472903-m02"
	I0917 00:13:38.433719       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-472903-m03"
	I0917 00:13:38.433784       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0917 00:13:38.434607       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0917 00:13:38.436471       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0917 00:13:38.443120       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0917 00:13:38.447017       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	
	
	==> kube-proxy [92dd4d116eb0387dded82fb32d35690ec2d00e3f5e7ac81bf7aea0c6814edd5e] <==
	I0916 23:56:56.831012       1 server_linux.go:53] "Using iptables proxy"
	I0916 23:56:56.891635       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0916 23:56:56.991820       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0916 23:56:56.991862       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0916 23:56:56.991952       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 23:56:57.015955       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 23:56:57.016001       1 server_linux.go:132] "Using iptables Proxier"
	I0916 23:56:57.021120       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 23:56:57.021457       1 server.go:527] "Version info" version="v1.34.0"
	I0916 23:56:57.021499       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 23:56:57.024872       1 config.go:200] "Starting service config controller"
	I0916 23:56:57.024892       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0916 23:56:57.024900       1 config.go:106] "Starting endpoint slice config controller"
	I0916 23:56:57.024909       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0916 23:56:57.024890       1 config.go:403] "Starting serviceCIDR config controller"
	I0916 23:56:57.024917       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0916 23:56:57.024937       1 config.go:309] "Starting node config controller"
	I0916 23:56:57.024942       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0916 23:56:57.125608       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0916 23:56:57.125691       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0916 23:56:57.125856       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0916 23:56:57.125902       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-proxy [b1c8344888d7deab1a3203bf9e16eefcb945905ec04b591acfb2fed3104948ec] <==
	I0917 00:13:36.733439       1 server_linux.go:53] "Using iptables proxy"
	I0917 00:13:36.818219       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0917 00:13:36.918912       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0917 00:13:36.918966       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0917 00:13:36.919071       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 00:13:36.942838       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0917 00:13:36.942910       1 server_linux.go:132] "Using iptables Proxier"
	I0917 00:13:36.949958       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 00:13:36.950427       1 server.go:527] "Version info" version="v1.34.0"
	I0917 00:13:36.950467       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 00:13:36.954376       1 config.go:200] "Starting service config controller"
	I0917 00:13:36.954506       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0917 00:13:36.954587       1 config.go:309] "Starting node config controller"
	I0917 00:13:36.954660       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0917 00:13:36.954669       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0917 00:13:36.954703       1 config.go:106] "Starting endpoint slice config controller"
	I0917 00:13:36.954712       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0917 00:13:36.954729       1 config.go:403] "Starting serviceCIDR config controller"
	I0917 00:13:36.954736       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0917 00:13:37.054981       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0917 00:13:37.055026       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0917 00:13:37.055057       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
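
	(Editor's note: both kube-proxy instances log the same warning about nodePortAddresses being unset. The effective value comes from the kube-proxy configuration ConfigMap; a sketch for inspecting it, assuming the default ConfigMap name kube-proxy:
	  out/minikube-linux-amd64 -p ha-472903 kubectl -- -n kube-system get configmap kube-proxy -o yaml | grep nodePortAddresses )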
	
	
	==> kube-scheduler [9685cc588651ced2d51ab783a94533fff6a60971435eaa8e11982eb715ef5350] <==
	I0917 00:13:30.068882       1 serving.go:386] Generated self-signed cert in-memory
	I0917 00:13:35.071453       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0917 00:13:35.071492       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 00:13:35.090261       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:13:35.090310       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:13:35.090614       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0917 00:13:35.090722       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0917 00:13:35.090743       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0917 00:13:35.090760       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0917 00:13:35.094479       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0917 00:13:35.094536       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I0917 00:13:35.190629       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:13:35.191303       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0917 00:13:35.194926       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kube-scheduler [bba28cace6502de93aa43db4fb51671581c5074990dea721d98d36d839734a67] <==
	E0916 23:56:48.619869       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0916 23:56:48.649766       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0916 23:56:48.673092       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	I0916 23:56:49.170967       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0916 23:57:51.780040       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-x6twd\": pod kindnet-x6twd is already assigned to node \"ha-472903-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-x6twd" node="ha-472903-m03"
	E0916 23:57:51.780142       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod f2346479-1adb-4bc7-af07-971525be2b05(kube-system/kindnet-x6twd) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-x6twd"
	E0916 23:57:51.780183       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-x6twd\": pod kindnet-x6twd is already assigned to node \"ha-472903-m03\"" logger="UnhandledError" pod="kube-system/kindnet-x6twd"
	I0916 23:57:51.782132       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-x6twd" node="ha-472903-m03"
	E0916 23:58:37.948695       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-x6xc9\": pod busybox-7b57f96db7-x6xc9 is already assigned to node \"ha-472903-m02\"" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-x6xc9" node="ha-472903-m02"
	E0916 23:58:37.948846       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 565a634f-ab41-4776-ba5d-63a601bfec48(default/busybox-7b57f96db7-x6xc9) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="default/busybox-7b57f96db7-x6xc9"
	E0916 23:58:37.948875       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-x6xc9\": pod busybox-7b57f96db7-x6xc9 is already assigned to node \"ha-472903-m02\"" logger="UnhandledError" pod="default/busybox-7b57f96db7-x6xc9"
	I0916 23:58:37.950251       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7b57f96db7-x6xc9" node="ha-472903-m02"
	I0916 23:58:37.966099       1 cache.go:512] "Pod was added to a different node than it was assumed" podKey="47b06c15-c007-4c50-a248-5411a0f4b6a7" pod="default/busybox-7b57f96db7-4jfjt" assumedNode="ha-472903-m02" currentNode="ha-472903"
	E0916 23:58:37.968241       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-4jfjt\": pod busybox-7b57f96db7-4jfjt is already assigned to node \"ha-472903-m02\"" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-4jfjt" node="ha-472903"
	E0916 23:58:37.968351       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 47b06c15-c007-4c50-a248-5411a0f4b6a7(default/busybox-7b57f96db7-4jfjt) was assumed on ha-472903 but assigned to ha-472903-m02" logger="UnhandledError" pod="default/busybox-7b57f96db7-4jfjt"
	E0916 23:58:37.968376       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-4jfjt\": pod busybox-7b57f96db7-4jfjt is already assigned to node \"ha-472903-m02\"" logger="UnhandledError" pod="default/busybox-7b57f96db7-4jfjt"
	I0916 23:58:37.969472       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7b57f96db7-4jfjt" node="ha-472903-m02"
	E0916 23:58:38.002469       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-wp95z\": pod busybox-7b57f96db7-wp95z is being deleted, cannot be assigned to a host" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-wp95z" node="ha-472903"
	E0916 23:58:38.002779       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-wp95z\": pod busybox-7b57f96db7-wp95z is being deleted, cannot be assigned to a host" logger="UnhandledError" pod="default/busybox-7b57f96db7-wp95z"
	E0916 23:58:38.046394       1 pod_status_patch.go:111] "Failed to patch pod status" err="pods \"busybox-7b57f96db7-xnrsc\" not found" pod="default/busybox-7b57f96db7-xnrsc"
	E0916 23:58:38.046880       1 pod_status_patch.go:111] "Failed to patch pod status" err="pods \"busybox-7b57f96db7-wp95z\" not found" pod="default/busybox-7b57f96db7-wp95z"
	E0916 23:58:40.050124       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-6hrm6\": pod busybox-7b57f96db7-6hrm6 is already assigned to node \"ha-472903\"" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-6hrm6" node="ha-472903"
	E0916 23:58:40.050213       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod bd03bad4-af1e-42d0-81fb-6fcaeaa8775e(default/busybox-7b57f96db7-6hrm6) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="default/busybox-7b57f96db7-6hrm6"
	E0916 23:58:40.050248       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-6hrm6\": pod busybox-7b57f96db7-6hrm6 is already assigned to node \"ha-472903\"" logger="UnhandledError" pod="default/busybox-7b57f96db7-6hrm6"
	I0916 23:58:40.051853       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7b57f96db7-6hrm6" node="ha-472903"
	
	
	==> kubelet <==
	Sep 17 00:13:35 ha-472903 kubelet[620]: I0917 00:13:35.179855     620 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-ha-472903"
	Sep 17 00:13:35 ha-472903 kubelet[620]: E0917 00:13:35.187290     620 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-ha-472903\" already exists" pod="kube-system/etcd-ha-472903"
	Sep 17 00:13:35 ha-472903 kubelet[620]: I0917 00:13:35.187325     620 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ha-472903"
	Sep 17 00:13:35 ha-472903 kubelet[620]: E0917 00:13:35.196172     620 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ha-472903\" already exists" pod="kube-system/kube-apiserver-ha-472903"
	Sep 17 00:13:36 ha-472903 kubelet[620]: I0917 00:13:36.029595     620 apiserver.go:52] "Watching apiserver"
	Sep 17 00:13:36 ha-472903 kubelet[620]: I0917 00:13:36.036032     620 kubelet.go:3202] "Trying to delete pod" pod="kube-system/kube-vip-ha-472903" podUID="ccdab212-cf0c-4bf0-958b-173e1008f7bc"
	Sep 17 00:13:36 ha-472903 kubelet[620]: I0917 00:13:36.052303     620 kubelet.go:3208] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/kube-vip-ha-472903"
	Sep 17 00:13:36 ha-472903 kubelet[620]: I0917 00:13:36.052325     620 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-vip-ha-472903"
	Sep 17 00:13:36 ha-472903 kubelet[620]: I0917 00:13:36.131204     620 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 17 00:13:36 ha-472903 kubelet[620]: I0917 00:13:36.137227     620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-472903" podStartSLOduration=0.137196984 podStartE2EDuration="137.196984ms" podCreationTimestamp="2025-09-17 00:13:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-17 00:13:36.118811818 +0000 UTC m=+7.151916686" watchObservedRunningTime="2025-09-17 00:13:36.137196984 +0000 UTC m=+7.170301850"
	Sep 17 00:13:36 ha-472903 kubelet[620]: I0917 00:13:36.155169     620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d4a70eec-48a7-4ea6-871a-1b5ed2beca9a-xtables-lock\") pod \"kube-proxy-d4m8f\" (UID: \"d4a70eec-48a7-4ea6-871a-1b5ed2beca9a\") " pod="kube-system/kube-proxy-d4m8f"
	Sep 17 00:13:36 ha-472903 kubelet[620]: I0917 00:13:36.156175     620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/1da43ca7-9af7-4573-9cdc-fd21b098ca2c-cni-cfg\") pod \"kindnet-lh7dv\" (UID: \"1da43ca7-9af7-4573-9cdc-fd21b098ca2c\") " pod="kube-system/kindnet-lh7dv"
	Sep 17 00:13:36 ha-472903 kubelet[620]: I0917 00:13:36.156592     620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1da43ca7-9af7-4573-9cdc-fd21b098ca2c-lib-modules\") pod \"kindnet-lh7dv\" (UID: \"1da43ca7-9af7-4573-9cdc-fd21b098ca2c\") " pod="kube-system/kindnet-lh7dv"
	Sep 17 00:13:36 ha-472903 kubelet[620]: I0917 00:13:36.156960     620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d4a70eec-48a7-4ea6-871a-1b5ed2beca9a-lib-modules\") pod \"kube-proxy-d4m8f\" (UID: \"d4a70eec-48a7-4ea6-871a-1b5ed2beca9a\") " pod="kube-system/kube-proxy-d4m8f"
	Sep 17 00:13:36 ha-472903 kubelet[620]: I0917 00:13:36.157372     620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1da43ca7-9af7-4573-9cdc-fd21b098ca2c-xtables-lock\") pod \"kindnet-lh7dv\" (UID: \"1da43ca7-9af7-4573-9cdc-fd21b098ca2c\") " pod="kube-system/kindnet-lh7dv"
	Sep 17 00:13:36 ha-472903 kubelet[620]: I0917 00:13:36.157474     620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ac7f283e-4d28-46cf-a519-bd227237d5e7-tmp\") pod \"storage-provisioner\" (UID: \"ac7f283e-4d28-46cf-a519-bd227237d5e7\") " pod="kube-system/storage-provisioner"
	Sep 17 00:13:37 ha-472903 kubelet[620]: I0917 00:13:37.056986     620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="021b917bf994c60a5ce7bb1b5d713b5b" path="/var/lib/kubelet/pods/021b917bf994c60a5ce7bb1b5d713b5b/volumes"
	Sep 17 00:13:38 ha-472903 kubelet[620]: I0917 00:13:38.149724     620 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 17 00:13:44 ha-472903 kubelet[620]: I0917 00:13:44.396062     620 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 17 00:13:44 ha-472903 kubelet[620]: I0917 00:13:44.750098     620 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 17 00:14:07 ha-472903 kubelet[620]: I0917 00:14:07.229109     620 scope.go:117] "RemoveContainer" containerID="5a5a17cca6c0a643b6c0881dab5508dcb7de8e6ad77d7e6ecb81d434ab2cc8a1"
	Sep 17 00:14:07 ha-472903 kubelet[620]: I0917 00:14:07.229537     620 scope.go:117] "RemoveContainer" containerID="360a9ae449a3affbb5373c19b5e7e14e1da3ec8397f5e21f1d3c31e298455266"
	Sep 17 00:14:07 ha-472903 kubelet[620]: E0917 00:14:07.229764     620 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(ac7f283e-4d28-46cf-a519-bd227237d5e7)\"" pod="kube-system/storage-provisioner" podUID="ac7f283e-4d28-46cf-a519-bd227237d5e7"
	Sep 17 00:14:20 ha-472903 kubelet[620]: I0917 00:14:20.052702     620 scope.go:117] "RemoveContainer" containerID="360a9ae449a3affbb5373c19b5e7e14e1da3ec8397f5e21f1d3c31e298455266"
	Sep 17 00:14:29 ha-472903 kubelet[620]: I0917 00:14:29.046747     620 scope.go:117] "RemoveContainer" containerID="8683544e2a9d579448e28b8f33653e2c8d1315b2d07bd7b4ce574428d93c6f3a"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-472903 -n ha-472903
helpers_test.go:269: (dbg) Run:  kubectl --context ha-472903 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-7b57f96db7-wkqz5
helpers_test.go:282: ======> post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context ha-472903 describe pod busybox-7b57f96db7-wkqz5
helpers_test.go:290: (dbg) kubectl --context ha-472903 describe pod busybox-7b57f96db7-wkqz5:

                                                
                                                
-- stdout --
	Name:             busybox-7b57f96db7-wkqz5
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7b57f96db7
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7b57f96db7
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bvn6l (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-bvn6l:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age   From               Message
	  ----     ------            ----  ----               -------
	  Warning  FailedScheduling  11s   default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  11s   default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  11s   default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  10s   default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  10s   default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  10s   default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (2.77s)
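Note on the FailedScheduling events above: "2 node(s) didn't match pod anti-affinity rules" indicates the pending busybox replica cannot co-locate with an existing app=busybox pod, and the remaining node is unschedulable after the secondary control-plane node was deleted. The deployment applied from ./testdata/ha/ha-pod-dns-test.yaml presumably spreads replicas with a required pod anti-affinity term on the app=busybox label keyed by hostname; the sketch below is an assumption reconstructed from the describe output (image, command, and labels are taken from it, the affinity block is inferred from the scheduler message) and may differ from the actual test manifest.

	apiVersion: apps/v1
	kind: Deployment
	metadata:
	  name: busybox
	spec:
	  replicas: 3
	  selector:
	    matchLabels:
	      app: busybox
	  template:
	    metadata:
	      labels:
	        app: busybox
	    spec:
	      affinity:
	        podAntiAffinity:
	          # one replica per node: a node already running an app=busybox pod is rejected,
	          # which is why the pending replica reports "didn't match pod anti-affinity rules"
	          requiredDuringSchedulingIgnoredDuringExecution:
	            - labelSelector:
	                matchExpressions:
	                  - key: app
	                    operator: In
	                    values: ["busybox"]
	              topologyKey: kubernetes.io/hostname
	      containers:
	        - name: busybox
	          image: gcr.io/k8s-minikube/busybox:1.28
	          command: ["sleep", "3600"]

Under such a spec, a 3-replica deployment needs three schedulable nodes; with one node gone and one cordoned, the third replica stays Pending, which matches the non-running pod busybox-7b57f96db7-wkqz5 reported by the post-mortem.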

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (357.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E0917 00:20:37.160365  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/addons-346612/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:20:49.958343  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/functional-695580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:25:37.159662  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/addons-346612/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:25:49.959832  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/functional-695580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-472903 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: signal: killed (5m55.488182136s)

                                                
                                                
-- stdout --
	* [ha-472903] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21550
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21550-749120/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-749120/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "ha-472903" primary control-plane node in "ha-472903" cluster
	* Pulling base image v0.0.48 ...
	* Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	* Enabled addons: 
	
	* Starting "ha-472903-m02" control-plane node in "ha-472903" cluster
	* Pulling base image v0.0.48 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2
	* Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	  - env NO_PROXY=192.168.49.2
	* Verifying Kubernetes components...
	
	* Starting "ha-472903-m04" worker node in "ha-472903" cluster
	* Pulling base image v0.0.48 ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 00:20:34.817765  853162 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:20:34.818050  853162 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:20:34.818060  853162 out.go:374] Setting ErrFile to fd 2...
	I0917 00:20:34.818066  853162 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:20:34.818300  853162 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-749120/.minikube/bin
	I0917 00:20:34.818768  853162 out.go:368] Setting JSON to false
	I0917 00:20:34.819697  853162 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":10977,"bootTime":1758057458,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 00:20:34.819797  853162 start.go:140] virtualization: kvm guest
	I0917 00:20:34.821739  853162 out.go:179] * [ha-472903] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0917 00:20:34.822814  853162 out.go:179]   - MINIKUBE_LOCATION=21550
	I0917 00:20:34.822810  853162 notify.go:220] Checking for updates...
	I0917 00:20:34.823822  853162 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 00:20:34.824878  853162 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-749120/kubeconfig
	I0917 00:20:34.825935  853162 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-749120/.minikube
	I0917 00:20:34.827008  853162 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 00:20:34.827960  853162 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 00:20:34.829312  853162 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:20:34.829841  853162 driver.go:421] Setting default libvirt URI to qemu:///system
	I0917 00:20:34.852217  853162 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0917 00:20:34.852298  853162 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:20:34.905027  853162 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-17 00:20:34.895284982 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:20:34.905128  853162 docker.go:318] overlay module found
	I0917 00:20:34.906623  853162 out.go:179] * Using the docker driver based on existing profile
	I0917 00:20:34.907564  853162 start.go:304] selected driver: docker
	I0917 00:20:34.907577  853162 start.go:918] validating driver "docker" against &{Name:ha-472903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNam
es:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false
kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath
: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:20:34.907679  853162 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 00:20:34.907759  853162 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:20:34.958491  853162 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-17 00:20:34.949319318 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:20:34.959144  853162 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:20:34.959172  853162 cni.go:84] Creating CNI manager for ""
	I0917 00:20:34.959227  853162 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0917 00:20:34.959277  853162 start.go:348] cluster config:
	{Name:ha-472903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISoc
ket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-s
erver:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPaus
eInterval:1m0s}
	I0917 00:20:34.960902  853162 out.go:179] * Starting "ha-472903" primary control-plane node in "ha-472903" cluster
	I0917 00:20:34.961889  853162 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0917 00:20:34.962922  853162 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:20:34.963776  853162 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0917 00:20:34.963806  853162 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4
	I0917 00:20:34.963815  853162 cache.go:58] Caching tarball of preloaded images
	I0917 00:20:34.963858  853162 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:20:34.963914  853162 preload.go:172] Found /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 00:20:34.963928  853162 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0917 00:20:34.964067  853162 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0917 00:20:34.983072  853162 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:20:34.983091  853162 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:20:34.983104  853162 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:20:34.983123  853162 start.go:360] acquireMachinesLock for ha-472903: {Name:mk994658ce3314f2aed1dec341debc49d36a4326 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:20:34.983187  853162 start.go:364] duration metric: took 36µs to acquireMachinesLock for "ha-472903"
	I0917 00:20:34.983204  853162 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:20:34.983209  853162 fix.go:54] fixHost starting: 
	I0917 00:20:34.983403  853162 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0917 00:20:34.999350  853162 fix.go:112] recreateIfNeeded on ha-472903: state=Stopped err=<nil>
	W0917 00:20:34.999373  853162 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:20:35.001788  853162 out.go:252] * Restarting existing docker container for "ha-472903" ...
	I0917 00:20:35.001837  853162 cli_runner.go:164] Run: docker start ha-472903
	I0917 00:20:35.217740  853162 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0917 00:20:35.235869  853162 kic.go:430] container "ha-472903" state is running.
	I0917 00:20:35.236239  853162 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903
	I0917 00:20:35.254029  853162 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0917 00:20:35.254260  853162 machine.go:93] provisionDockerMachine start ...
	I0917 00:20:35.254344  853162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0917 00:20:35.272313  853162 main.go:141] libmachine: Using SSH client type: native
	I0917 00:20:35.272572  853162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33599 <nil> <nil>}
	I0917 00:20:35.272585  853162 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:20:35.273254  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40730->127.0.0.1:33599: read: connection reset by peer
	I0917 00:20:38.407730  853162 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-472903
	
	I0917 00:20:38.407755  853162 ubuntu.go:182] provisioning hostname "ha-472903"
	I0917 00:20:38.407804  853162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0917 00:20:38.425553  853162 main.go:141] libmachine: Using SSH client type: native
	I0917 00:20:38.425819  853162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33599 <nil> <nil>}
	I0917 00:20:38.425837  853162 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-472903 && echo "ha-472903" | sudo tee /etc/hostname
	I0917 00:20:38.569475  853162 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-472903
	
	I0917 00:20:38.569616  853162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0917 00:20:38.586741  853162 main.go:141] libmachine: Using SSH client type: native
	I0917 00:20:38.586939  853162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33599 <nil> <nil>}
	I0917 00:20:38.586958  853162 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-472903' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-472903/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-472903' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 00:20:38.719202  853162 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:20:38.719228  853162 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-749120/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-749120/.minikube}
	I0917 00:20:38.719252  853162 ubuntu.go:190] setting up certificates
	I0917 00:20:38.719262  853162 provision.go:84] configureAuth start
	I0917 00:20:38.719317  853162 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903
	I0917 00:20:38.736502  853162 provision.go:143] copyHostCerts
	I0917 00:20:38.736535  853162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem
	I0917 00:20:38.736560  853162 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem, removing ...
	I0917 00:20:38.736573  853162 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem
	I0917 00:20:38.736639  853162 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem (1123 bytes)
	I0917 00:20:38.736722  853162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem
	I0917 00:20:38.736740  853162 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem, removing ...
	I0917 00:20:38.736746  853162 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem
	I0917 00:20:38.736779  853162 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem (1675 bytes)
	I0917 00:20:38.736839  853162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem
	I0917 00:20:38.736856  853162 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem, removing ...
	I0917 00:20:38.736863  853162 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem
	I0917 00:20:38.736886  853162 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem (1078 bytes)
	I0917 00:20:38.736955  853162 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem org=jenkins.ha-472903 san=[127.0.0.1 192.168.49.2 ha-472903 localhost minikube]
	I0917 00:20:39.436644  853162 provision.go:177] copyRemoteCerts
	I0917 00:20:39.436706  853162 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:20:39.436743  853162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0917 00:20:39.454217  853162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33599 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0917 00:20:39.550482  853162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 00:20:39.550536  853162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 00:20:39.574507  853162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 00:20:39.574569  853162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0917 00:20:39.597445  853162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 00:20:39.597494  853162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 00:20:39.620037  853162 provision.go:87] duration metric: took 900.762496ms to configureAuth
	I0917 00:20:39.620063  853162 ubuntu.go:206] setting minikube options for container-runtime
	I0917 00:20:39.620256  853162 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:20:39.620269  853162 machine.go:96] duration metric: took 4.365996012s to provisionDockerMachine
	I0917 00:20:39.620277  853162 start.go:293] postStartSetup for "ha-472903" (driver="docker")
	I0917 00:20:39.620285  853162 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 00:20:39.620327  853162 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 00:20:39.620359  853162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0917 00:20:39.637486  853162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33599 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0917 00:20:39.732442  853162 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 00:20:39.735602  853162 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 00:20:39.735625  853162 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 00:20:39.735632  853162 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 00:20:39.735639  853162 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 00:20:39.735651  853162 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-749120/.minikube/addons for local assets ...
	I0917 00:20:39.735695  853162 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-749120/.minikube/files for local assets ...
	I0917 00:20:39.735772  853162 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> 7527072.pem in /etc/ssl/certs
	I0917 00:20:39.735783  853162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> /etc/ssl/certs/7527072.pem
	I0917 00:20:39.735865  853162 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 00:20:39.744181  853162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem --> /etc/ssl/certs/7527072.pem (1708 bytes)
	I0917 00:20:39.766920  853162 start.go:296] duration metric: took 146.623151ms for postStartSetup
	I0917 00:20:39.766994  853162 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:20:39.767036  853162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0917 00:20:39.784014  853162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33599 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0917 00:20:39.873850  853162 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:20:39.877996  853162 fix.go:56] duration metric: took 4.894780852s for fixHost
	I0917 00:20:39.878018  853162 start.go:83] releasing machines lock for "ha-472903", held for 4.894820875s
	I0917 00:20:39.878073  853162 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903
	I0917 00:20:39.894846  853162 ssh_runner.go:195] Run: cat /version.json
	I0917 00:20:39.894889  853162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0917 00:20:39.894941  853162 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 00:20:39.895004  853162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0917 00:20:39.911644  853162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33599 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0917 00:20:39.912047  853162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33599 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0917 00:20:40.078918  853162 ssh_runner.go:195] Run: systemctl --version
	I0917 00:20:40.083613  853162 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 00:20:40.087967  853162 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0917 00:20:40.105776  853162 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0917 00:20:40.105849  853162 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:20:40.114111  853162 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0917 00:20:40.114131  853162 start.go:495] detecting cgroup driver to use...
	I0917 00:20:40.114160  853162 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:20:40.114211  853162 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 00:20:40.126822  853162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 00:20:40.137466  853162 docker.go:218] disabling cri-docker service (if available) ...
	I0917 00:20:40.137505  853162 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 00:20:40.149166  853162 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 00:20:40.159495  853162 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 00:20:40.221061  853162 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 00:20:40.285639  853162 docker.go:234] disabling docker service ...
	I0917 00:20:40.285699  853162 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 00:20:40.297018  853162 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 00:20:40.307232  853162 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 00:20:40.368438  853162 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 00:20:40.428251  853162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 00:20:40.438823  853162 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:20:40.454486  853162 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0917 00:20:40.463862  853162 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 00:20:40.473036  853162 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0917 00:20:40.473078  853162 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0917 00:20:40.482289  853162 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:20:40.491479  853162 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 00:20:40.500319  853162 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:20:40.509542  853162 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 00:20:40.518237  853162 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 00:20:40.527237  853162 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 00:20:40.536342  853162 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 00:20:40.545444  853162 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 00:20:40.553515  853162 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 00:20:40.561529  853162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:20:40.624992  853162 ssh_runner.go:195] Run: sudo systemctl restart containerd
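Note on the containerd reconfiguration just above: the chain of `sudo sed -i` commands pins the sandbox image, normalizes the runtime type, and forces `SystemdCgroup = true` before `systemctl restart containerd`. Below is a minimal Go sketch of that last in-place edit; it is illustrative only (not minikube's implementation), and the config path is an assumption.

```go
// systemdcgroup.go - a minimal sketch (not minikube's code) of the kind of
// in-place edit the "sudo sed -i ... SystemdCgroup = true" step performs.
// The config path is an assumption for illustration only.
package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	path := "/etc/containerd/config.toml" // assumed default containerd config path
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, "read config:", err)
		os.Exit(1)
	}
	// Match any existing "SystemdCgroup = ..." assignment, keeping its indentation.
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	updated := re.ReplaceAll(data, []byte("${1}SystemdCgroup = true"))
	if err := os.WriteFile(path, updated, 0o644); err != nil {
		fmt.Fprintln(os.Stderr, "write config:", err)
		os.Exit(1)
	}
	fmt.Println("SystemdCgroup forced to true; restart containerd to apply")
}
```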
	I0917 00:20:40.738118  853162 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0917 00:20:40.738194  853162 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0917 00:20:40.742637  853162 start.go:563] Will wait 60s for crictl version
	I0917 00:20:40.742675  853162 ssh_runner.go:195] Run: which crictl
	I0917 00:20:40.746234  853162 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:20:40.779868  853162 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0917 00:20:40.779923  853162 ssh_runner.go:195] Run: containerd --version
	I0917 00:20:40.803110  853162 ssh_runner.go:195] Run: containerd --version
	I0917 00:20:40.828669  853162 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0917 00:20:40.829724  853162 cli_runner.go:164] Run: docker network inspect ha-472903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:20:40.846721  853162 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0917 00:20:40.850282  853162 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:20:40.861751  853162 kubeadm.go:875] updating cluster {Name:ha-472903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIP
s:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubeta
il:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: Socke
tVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 00:20:40.861890  853162 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0917 00:20:40.861944  853162 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 00:20:40.894814  853162 containerd.go:627] all images are preloaded for containerd runtime.
	I0917 00:20:40.894873  853162 containerd.go:534] Images already preloaded, skipping extraction
	I0917 00:20:40.894952  853162 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 00:20:40.926254  853162 containerd.go:627] all images are preloaded for containerd runtime.
	I0917 00:20:40.926274  853162 cache_images.go:85] Images are preloaded, skipping loading
	I0917 00:20:40.926282  853162 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 containerd true true} ...
	I0917 00:20:40.926376  853162 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-472903 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 00:20:40.926447  853162 ssh_runner.go:195] Run: sudo crictl info
	I0917 00:20:40.958896  853162 cni.go:84] Creating CNI manager for ""
	I0917 00:20:40.958916  853162 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0917 00:20:40.958927  853162 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 00:20:40.958949  853162 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-472903 NodeName:ha-472903 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 00:20:40.959064  853162 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "ha-472903"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
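The YAML above is the fully rendered kubeadm configuration that is later written to /var/tmp/minikube/kubeadm.yaml.new (see the 2221-byte scp below). As a hypothetical sketch, a fragment like its InitConfiguration section could be produced from a template as follows; the struct and template text here are illustrative and are not minikube's actual template.

```go
// kubeadmcfg.go - a hypothetical sketch of rendering a kubeadm
// InitConfiguration fragment from a template, in the spirit of the config
// printed above. Values are copied from the log; the template is not minikube's.
package main

import (
	"os"
	"text/template"
)

type initCfg struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	CRISocket        string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  taints: []
`

func main() {
	cfg := initCfg{
		AdvertiseAddress: "192.168.49.2",
		BindPort:         8443,
		NodeName:         "ha-472903",
		CRISocket:        "unix:///run/containerd/containerd.sock",
	}
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	if err := t.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}
```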
	
	I0917 00:20:40.959093  853162 kube-vip.go:115] generating kube-vip config ...
	I0917 00:20:40.959125  853162 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0917 00:20:40.971220  853162 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:20:40.971326  853162 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
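The manifest above is the kube-vip static pod; control-plane load-balancing was skipped because the `lsmod | grep ip_vs` probe a few lines earlier found no ipvs modules. A small sketch of the same probe done by reading /proc/modules directly (assumes Linux; not minikube's code):

```go
// ipvscheck.go - a minimal sketch (assumption: Linux with /proc/modules) of the
// "is ip_vs loaded?" gate that the lsmod | grep step above implements by shelling out.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func ipvsLoaded() (bool, error) {
	f, err := os.Open("/proc/modules")
	if err != nil {
		return false, err
	}
	defer f.Close()
	s := bufio.NewScanner(f)
	for s.Scan() {
		// Each line starts with the module name, e.g. "ip_vs 180224 2 ip_vs_rr ..."
		fields := strings.Fields(s.Text())
		if len(fields) > 0 && strings.HasPrefix(fields[0], "ip_vs") {
			return true, nil
		}
	}
	return false, s.Err()
}

func main() {
	ok, err := ipvsLoaded()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("ip_vs loaded:", ok) // false here => kube-vip load-balancing skipped, as in the log
}
```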
	I0917 00:20:40.971379  853162 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 00:20:40.979716  853162 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 00:20:40.979776  853162 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0917 00:20:40.987725  853162 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0917 00:20:41.004548  853162 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 00:20:41.021543  853162 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I0917 00:20:41.038428  853162 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0917 00:20:41.055058  853162 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0917 00:20:41.058425  853162 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:20:41.068877  853162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:20:41.128444  853162 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:20:41.150361  853162 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903 for IP: 192.168.49.2
	I0917 00:20:41.150384  853162 certs.go:194] generating shared ca certs ...
	I0917 00:20:41.150406  853162 certs.go:226] acquiring lock for ca certs: {Name:mk87d179b4a631193bd9c86db8034ccf19400cde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:20:41.150585  853162 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key
	I0917 00:20:41.150648  853162 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key
	I0917 00:20:41.150665  853162 certs.go:256] generating profile certs ...
	I0917 00:20:41.150759  853162 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key
	I0917 00:20:41.150787  853162 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.7fe3fff7
	I0917 00:20:41.150803  853162 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.7fe3fff7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I0917 00:20:41.284811  853162 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.7fe3fff7 ...
	I0917 00:20:41.284845  853162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.7fe3fff7: {Name:mk4e25c9a0c911945cadf30fa1e7c0959be02913 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:20:41.285054  853162 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.7fe3fff7 ...
	I0917 00:20:41.285080  853162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.7fe3fff7: {Name:mkcad529b0b61f0240d944f5171e144f38a585c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:20:41.285202  853162 certs.go:381] copying /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.7fe3fff7 -> /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt
	I0917 00:20:41.285351  853162 certs.go:385] copying /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.7fe3fff7 -> /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key
	I0917 00:20:41.285542  853162 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key
	I0917 00:20:41.285560  853162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 00:20:41.285573  853162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 00:20:41.285586  853162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 00:20:41.285597  853162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 00:20:41.285608  853162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 00:20:41.285619  853162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 00:20:41.285629  853162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 00:20:41.285639  853162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 00:20:41.285695  853162 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem (1338 bytes)
	W0917 00:20:41.285728  853162 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707_empty.pem, impossibly tiny 0 bytes
	I0917 00:20:41.285737  853162 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 00:20:41.285757  853162 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem (1078 bytes)
	I0917 00:20:41.285778  853162 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem (1123 bytes)
	I0917 00:20:41.285798  853162 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem (1675 bytes)
	I0917 00:20:41.285835  853162 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem (1708 bytes)
	I0917 00:20:41.285861  853162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:20:41.285875  853162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem -> /usr/share/ca-certificates/752707.pem
	I0917 00:20:41.285887  853162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> /usr/share/ca-certificates/7527072.pem
	I0917 00:20:41.286470  853162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 00:20:41.314507  853162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 00:20:41.342333  853162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 00:20:41.368008  853162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0917 00:20:41.391633  853162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0917 00:20:41.414593  853162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 00:20:41.436880  853162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 00:20:41.459519  853162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 00:20:41.482263  853162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 00:20:41.505016  853162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem --> /usr/share/ca-certificates/752707.pem (1338 bytes)
	I0917 00:20:41.528016  853162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem --> /usr/share/ca-certificates/7527072.pem (1708 bytes)
	I0917 00:20:41.550286  853162 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 00:20:41.567962  853162 ssh_runner.go:195] Run: openssl version
	I0917 00:20:41.573075  853162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 00:20:41.582025  853162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:20:41.585261  853162 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:20:41.585312  853162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:20:41.591623  853162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 00:20:41.599661  853162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/752707.pem && ln -fs /usr/share/ca-certificates/752707.pem /etc/ssl/certs/752707.pem"
	I0917 00:20:41.608629  853162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/752707.pem
	I0917 00:20:41.611857  853162 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 23:54 /usr/share/ca-certificates/752707.pem
	I0917 00:20:41.611901  853162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/752707.pem
	I0917 00:20:41.618330  853162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/752707.pem /etc/ssl/certs/51391683.0"
	I0917 00:20:41.626513  853162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7527072.pem && ln -fs /usr/share/ca-certificates/7527072.pem /etc/ssl/certs/7527072.pem"
	I0917 00:20:41.635455  853162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7527072.pem
	I0917 00:20:41.638741  853162 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 23:54 /usr/share/ca-certificates/7527072.pem
	I0917 00:20:41.638775  853162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7527072.pem
	I0917 00:20:41.645252  853162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7527072.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 00:20:41.654246  853162 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 00:20:41.657902  853162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 00:20:41.664731  853162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 00:20:41.673674  853162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 00:20:41.682122  853162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 00:20:41.691803  853162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 00:20:41.701302  853162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
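Each `openssl x509 ... -checkend 86400` run above asks whether the certificate remains valid for at least another 24 hours. The equivalent check with Go's crypto/x509, as a sketch (the path shown is one of the files checked above):

```go
// certcheck.go - a sketch of the "-checkend 86400" idea: does the certificate
// stay valid for at least another 24h?
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("valid for >=24h:", ok)
}
```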
	I0917 00:20:41.709987  853162 kubeadm.go:392] StartCluster: {Name:ha-472903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[
] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:
false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVM
netPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:20:41.710184  853162 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0917 00:20:41.710238  853162 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 00:20:41.761776  853162 cri.go:89] found id: "b5592b8113e586d14715b024e8de6717e69df30cb94f4109a26ff3cab584226d"
	I0917 00:20:41.761804  853162 cri.go:89] found id: "c8a737e1be33c6e4b6e17f5359483d22d3eeb7ca2497546109c1097eb9343a7f"
	I0917 00:20:41.761810  853162 cri.go:89] found id: "2a56abb41f49d6755de68bb41070eee7c07fee5950b2584042a3850228b3c274"
	I0917 00:20:41.761815  853162 cri.go:89] found id: "aeea8f1127caf7117ade119a9e492104789925a531209d0aba3022cd18cb7ce1"
	I0917 00:20:41.761820  853162 cri.go:89] found id: "9fc46931c7aae5fea2058b723439b03184beee352ff9a7efcf262818181a635d"
	I0917 00:20:41.761825  853162 cri.go:89] found id: "b1c8344888d7deab1a3203bf9e16eefcb945905ec04b591acfb2fed3104948ec"
	I0917 00:20:41.761829  853162 cri.go:89] found id: "9685cc588651ced2d51ab783a94533fff6a60971435eaa8e11982eb715ef5350"
	I0917 00:20:41.761833  853162 cri.go:89] found id: "c3f8ee22fca28b303f553c3003d1000b80565b4147ba719401c8c5f61921ee41"
	I0917 00:20:41.761838  853162 cri.go:89] found id: "96d46a46d90937e1dc254cbb641e1f12887151faabbe128f2cc51a8a833fe573"
	I0917 00:20:41.761846  853162 cri.go:89] found id: "90b187ed887fae063d0e3d6e7f9316abbc50f1e7b9c092596b43a1c43c86e79d"
	I0917 00:20:41.761850  853162 cri.go:89] found id: ""
	I0917 00:20:41.761898  853162 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0917 00:20:41.786684  853162 cri.go:116] JSON = [{"ociVersion":"1.2.0","id":"6fdef4164f99d27df21ab6d287cff2c0f203d8a05664455c8f947bb4cd8426c9","pid":1059,"status":"created","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6fdef4164f99d27df21ab6d287cff2c0f203d8a05664455c8f947bb4cd8426c9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6fdef4164f99d27df21ab6d287cff2c0f203d8a05664455c8f947bb4cd8426c9/rootfs","created":"2025-09-17T00:20:41.755974156Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"6fdef4164f99d27df21ab6d287cff2c0f203d8a05664455c8f947bb4cd8426c9","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-ha-472903_b57b4e111181f4c4157cb1fbf888e56c","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-ha-472903","io.kubernete
s.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"b57b4e111181f4c4157cb1fbf888e56c"},"owner":"root"},{"ociVersion":"1.2.0","id":"bcfa57226a9577e56259b5c1334edc792fe612c5f16df69faaae8a23e78e3d42","pid":1034,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bcfa57226a9577e56259b5c1334edc792fe612c5f16df69faaae8a23e78e3d42","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bcfa57226a9577e56259b5c1334edc792fe612c5f16df69faaae8a23e78e3d42/rootfs","created":"2025-09-17T00:20:41.732625278Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"bcfa57226a9577e56259b5c1334edc792fe612c5f16df69faaae8a23e78e3d42","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-ha-472903_2227061675da4ed34922d350d8862f72","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.
sandbox-name":"etcd-ha-472903","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"2227061675da4ed34922d350d8862f72"},"owner":"root"},{"ociVersion":"1.2.0","id":"e3f12221b67dba9fbc3b6d224220f55644b041f5627781f262665ff7739202c5","pid":1069,"status":"created","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e3f12221b67dba9fbc3b6d224220f55644b041f5627781f262665ff7739202c5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e3f12221b67dba9fbc3b6d224220f55644b041f5627781f262665ff7739202c5/rootfs","created":"2025-09-17T00:20:41.758018269Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"e3f12221b67dba9fbc3b6d224220f55644b041f5627781f262665ff7739202c5","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-ha-472903_485e4849d897bc38a8d0e2cce5ff1
09b","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-ha-472903","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"485e4849d897bc38a8d0e2cce5ff109b"},"owner":"root"},{"ociVersion":"1.2.0","id":"f5b6b96a38f0f8c3f34ca3c58bfdb904e4f2e4b6e569d9f7eb15a4f80fa3f460","pid":1068,"status":"created","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f5b6b96a38f0f8c3f34ca3c58bfdb904e4f2e4b6e569d9f7eb15a4f80fa3f460","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f5b6b96a38f0f8c3f34ca3c58bfdb904e4f2e4b6e569d9f7eb15a4f80fa3f460/rootfs","created":"2025-09-17T00:20:41.758888329Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"f5b6b96a38f0f8c3f34ca3c58bfdb904e4f2e4b6e569d9f7eb15a4f80fa3f460","io.kubernetes.cri.sandbox-log-directory":"/var/log/
pods/kube-system_kube-vip-ha-472903_3d9fc459b47fceff3d235003420b1a14","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-vip-ha-472903","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"3d9fc459b47fceff3d235003420b1a14"},"owner":"root"},{"ociVersion":"1.2.0","id":"fb2e9453a484b4c1536899e257ec541863f125a6f8ab0a620307af36a275f1f1","pid":1044,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fb2e9453a484b4c1536899e257ec541863f125a6f8ab0a620307af36a275f1f1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fb2e9453a484b4c1536899e257ec541863f125a6f8ab0a620307af36a275f1f1/rootfs","created":"2025-09-17T00:20:41.734135829Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"fb2e9453a484b4c1536899e257ec541863f125a6f8ab0a620307af36a275f1f1",
"io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-ha-472903_837a6c7c1a3b42ddee2d42c480d95c76","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-ha-472903","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"837a6c7c1a3b42ddee2d42c480d95c76"},"owner":"root"}]
	I0917 00:20:41.786840  853162 cri.go:126] list returned 5 containers
	I0917 00:20:41.786852  853162 cri.go:129] container: {ID:6fdef4164f99d27df21ab6d287cff2c0f203d8a05664455c8f947bb4cd8426c9 Status:created}
	I0917 00:20:41.786930  853162 cri.go:131] skipping 6fdef4164f99d27df21ab6d287cff2c0f203d8a05664455c8f947bb4cd8426c9 - not in ps
	I0917 00:20:41.786946  853162 cri.go:129] container: {ID:bcfa57226a9577e56259b5c1334edc792fe612c5f16df69faaae8a23e78e3d42 Status:running}
	I0917 00:20:41.786959  853162 cri.go:131] skipping bcfa57226a9577e56259b5c1334edc792fe612c5f16df69faaae8a23e78e3d42 - not in ps
	I0917 00:20:41.786967  853162 cri.go:129] container: {ID:e3f12221b67dba9fbc3b6d224220f55644b041f5627781f262665ff7739202c5 Status:created}
	I0917 00:20:41.786972  853162 cri.go:131] skipping e3f12221b67dba9fbc3b6d224220f55644b041f5627781f262665ff7739202c5 - not in ps
	I0917 00:20:41.786977  853162 cri.go:129] container: {ID:f5b6b96a38f0f8c3f34ca3c58bfdb904e4f2e4b6e569d9f7eb15a4f80fa3f460 Status:created}
	I0917 00:20:41.786995  853162 cri.go:131] skipping f5b6b96a38f0f8c3f34ca3c58bfdb904e4f2e4b6e569d9f7eb15a4f80fa3f460 - not in ps
	I0917 00:20:41.786999  853162 cri.go:129] container: {ID:fb2e9453a484b4c1536899e257ec541863f125a6f8ab0a620307af36a275f1f1 Status:running}
	I0917 00:20:41.787003  853162 cri.go:131] skipping fb2e9453a484b4c1536899e257ec541863f125a6f8ab0a620307af36a275f1f1 - not in ps
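The `runc ... list -f json` output above is decoded and each container's status inspected; sandboxes that are only `created` or absent from the crictl listing are skipped. A sketch of decoding that JSON shape into the two fields used here (illustrative, not cri.go itself; the runc root is copied from the log):

```go
// runclist.go - a sketch of parsing `runc list -f json` output like the blob
// above into (id, status) pairs; only the two fields used here are modelled.
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"` // e.g. "created", "running", "paused"
}

func main() {
	out, err := exec.Command("sudo", "runc", "--root", "/run/containerd/runc/k8s.io",
		"list", "-f", "json").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "runc list:", err)
		os.Exit(1)
	}
	var containers []runcContainer
	if err := json.Unmarshal(out, &containers); err != nil {
		fmt.Fprintln(os.Stderr, "decode:", err)
		os.Exit(1)
	}
	for _, c := range containers {
		fmt.Printf("%s %s\n", c.ID, c.Status)
	}
}
```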
	I0917 00:20:41.787065  853162 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 00:20:41.800337  853162 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0917 00:20:41.800363  853162 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0917 00:20:41.800467  853162 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0917 00:20:41.812736  853162 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:20:41.813158  853162 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-472903" does not appear in /home/jenkins/minikube-integration/21550-749120/kubeconfig
	I0917 00:20:41.813314  853162 kubeconfig.go:62] /home/jenkins/minikube-integration/21550-749120/kubeconfig needs updating (will repair): [kubeconfig missing "ha-472903" cluster setting kubeconfig missing "ha-472903" context setting]
	I0917 00:20:41.813672  853162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/kubeconfig: {Name:mk937123a8fee18625833b0bd778c4556f6787be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:20:41.814257  853162 kapi.go:59] client config for ha-472903: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key", CAFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0917 00:20:41.814963  853162 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0917 00:20:41.814974  853162 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0917 00:20:41.814988  853162 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0917 00:20:41.814996  853162 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0917 00:20:41.815001  853162 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0917 00:20:41.815006  853162 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0917 00:20:41.815574  853162 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0917 00:20:41.827605  853162 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.49.2
	I0917 00:20:41.827639  853162 kubeadm.go:593] duration metric: took 27.268873ms to restartPrimaryControlPlane
	I0917 00:20:41.827648  853162 kubeadm.go:394] duration metric: took 117.678147ms to StartCluster
	I0917 00:20:41.827665  853162 settings.go:142] acquiring lock: {Name:mk6c1a5bee23e141aad5180323c16c47ed580ab8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:20:41.827744  853162 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21550-749120/kubeconfig
	I0917 00:20:41.828336  853162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/kubeconfig: {Name:mk937123a8fee18625833b0bd778c4556f6787be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:20:41.828600  853162 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0917 00:20:41.828622  853162 start.go:241] waiting for startup goroutines ...
	I0917 00:20:41.828631  853162 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 00:20:41.828875  853162 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:20:41.831318  853162 out.go:179] * Enabled addons: 
	I0917 00:20:41.832539  853162 addons.go:514] duration metric: took 3.901981ms for enable addons: enabled=[]
	I0917 00:20:41.832574  853162 start.go:246] waiting for cluster config update ...
	I0917 00:20:41.832585  853162 start.go:255] writing updated cluster config ...
	I0917 00:20:41.834551  853162 out.go:203] 
	I0917 00:20:41.835727  853162 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:20:41.835814  853162 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0917 00:20:41.838024  853162 out.go:179] * Starting "ha-472903-m02" control-plane node in "ha-472903" cluster
	I0917 00:20:41.839194  853162 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0917 00:20:41.841484  853162 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:20:41.842456  853162 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0917 00:20:41.842487  853162 cache.go:58] Caching tarball of preloaded images
	I0917 00:20:41.842550  853162 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:20:41.842589  853162 preload.go:172] Found /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 00:20:41.842599  853162 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0917 00:20:41.842736  853162 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0917 00:20:41.865763  853162 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:20:41.865788  853162 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:20:41.865814  853162 cache.go:232] Successfully downloaded all kic artifacts
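The kic base image was found in the local docker daemon, so the pull is skipped. A sketch of that existence check using the docker CLI's exit status (the digest-pinned ref from the log is shortened here for readability):

```go
// imagecheck.go - a sketch of "is this image already in the local daemon?",
// the decision behind the "skipping pull" lines above.
package main

import (
	"fmt"
	"os/exec"
)

func imageInDaemon(ref string) bool {
	// `docker image inspect` exits non-zero when the image is not present locally.
	return exec.Command("docker", "image", "inspect", ref).Run() == nil
}

func main() {
	ref := "gcr.io/k8s-minikube/kicbase:v0.0.48"
	if imageInDaemon(ref) {
		fmt.Println("found in local docker daemon, skipping pull")
	} else {
		fmt.Println("not found locally, would pull", ref)
	}
}
```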
	I0917 00:20:41.865851  853162 start.go:360] acquireMachinesLock for ha-472903-m02: {Name:mk81d8c73856cf84ceff1767a1681f3f3cdab773 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:20:41.865946  853162 start.go:364] duration metric: took 47.876µs to acquireMachinesLock for "ha-472903-m02"
	I0917 00:20:41.865986  853162 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:20:41.866000  853162 fix.go:54] fixHost starting: m02
	I0917 00:20:41.866330  853162 cli_runner.go:164] Run: docker container inspect ha-472903-m02 --format={{.State.Status}}
	I0917 00:20:41.891138  853162 fix.go:112] recreateIfNeeded on ha-472903-m02: state=Stopped err=<nil>
	W0917 00:20:41.891172  853162 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:20:41.892758  853162 out.go:252] * Restarting existing docker container for "ha-472903-m02" ...
	I0917 00:20:41.892836  853162 cli_runner.go:164] Run: docker start ha-472903-m02
	I0917 00:20:42.162399  853162 cli_runner.go:164] Run: docker container inspect ha-472903-m02 --format={{.State.Status}}
	I0917 00:20:42.182746  853162 kic.go:430] container "ha-472903-m02" state is running.
	I0917 00:20:42.183063  853162 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m02
	I0917 00:20:42.202634  853162 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0917 00:20:42.202870  853162 machine.go:93] provisionDockerMachine start ...
	I0917 00:20:42.202934  853162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0917 00:20:42.222574  853162 main.go:141] libmachine: Using SSH client type: native
	I0917 00:20:42.222817  853162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33604 <nil> <nil>}
	I0917 00:20:42.222833  853162 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:20:42.223554  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57840->127.0.0.1:33604: read: connection reset by peer
	I0917 00:20:45.357833  853162 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-472903-m02
	
	I0917 00:20:45.357866  853162 ubuntu.go:182] provisioning hostname "ha-472903-m02"
	I0917 00:20:45.357933  853162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0917 00:20:45.375079  853162 main.go:141] libmachine: Using SSH client type: native
	I0917 00:20:45.375292  853162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33604 <nil> <nil>}
	I0917 00:20:45.375306  853162 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-472903-m02 && echo "ha-472903-m02" | sudo tee /etc/hostname
	I0917 00:20:45.521107  853162 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-472903-m02
	
	I0917 00:20:45.521182  853162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0917 00:20:45.538481  853162 main.go:141] libmachine: Using SSH client type: native
	I0917 00:20:45.538686  853162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33604 <nil> <nil>}
	I0917 00:20:45.538702  853162 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-472903-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-472903-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-472903-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 00:20:45.686858  853162 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:20:45.686886  853162 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-749120/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-749120/.minikube}
	I0917 00:20:45.686908  853162 ubuntu.go:190] setting up certificates
	I0917 00:20:45.686921  853162 provision.go:84] configureAuth start
	I0917 00:20:45.686989  853162 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m02
	I0917 00:20:45.709916  853162 provision.go:143] copyHostCerts
	I0917 00:20:45.709958  853162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem
	I0917 00:20:45.710001  853162 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem, removing ...
	I0917 00:20:45.710013  853162 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem
	I0917 00:20:45.710094  853162 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem (1078 bytes)
	I0917 00:20:45.710217  853162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem
	I0917 00:20:45.710252  853162 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem, removing ...
	I0917 00:20:45.710259  853162 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem
	I0917 00:20:45.710316  853162 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem (1123 bytes)
	I0917 00:20:45.710546  853162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem
	I0917 00:20:45.710580  853162 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem, removing ...
	I0917 00:20:45.710587  853162 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem
	I0917 00:20:45.710634  853162 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem (1675 bytes)
	I0917 00:20:45.710760  853162 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem org=jenkins.ha-472903-m02 san=[127.0.0.1 192.168.49.3 ha-472903-m02 localhost minikube]
	I0917 00:20:45.966214  853162 provision.go:177] copyRemoteCerts
	I0917 00:20:45.966288  853162 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:20:45.966340  853162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0917 00:20:45.993644  853162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33604 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa Username:docker}
	I0917 00:20:46.100720  853162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 00:20:46.100794  853162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 00:20:46.126316  853162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 00:20:46.126385  853162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 00:20:46.153622  853162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 00:20:46.153689  853162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 00:20:46.186741  853162 provision.go:87] duration metric: took 499.804986ms to configureAuth
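configureAuth above generated a server certificate for ha-472903-m02 with SANs [127.0.0.1 192.168.49.3 ha-472903-m02 localhost minikube], signed by the minikube CA. A sketch of issuing such a SAN certificate from an existing CA with crypto/x509 follows; file paths, validity period, and key size are assumptions, and this is not minikube's provision code.

```go
// servercert.go - a sketch of issuing a server certificate with the SANs seen
// in the log, signed by an existing CA key pair. Paths are assumptions.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/tls"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Load the CA (assumed local copies of .minikube/certs/ca.pem and ca-key.pem).
	caPair, err := tls.LoadX509KeyPair("ca.pem", "ca-key.pem")
	if err != nil {
		panic(err)
	}
	caCert, err := x509.ParseCertificate(caPair.Certificate[0])
	if err != nil {
		panic(err)
	}

	// Fresh key for the machine's server certificate.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "ha-472903-m02", Organization: []string{"jenkins.ha-472903-m02"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().AddDate(3, 0, 0), // assumed validity period
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the provision.go line above.
		DNSNames:    []string{"ha-472903-m02", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.3")},
	}

	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caPair.PrivateKey)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```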
	I0917 00:20:46.186774  853162 ubuntu.go:206] setting minikube options for container-runtime
	I0917 00:20:46.187065  853162 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:20:46.187081  853162 machine.go:96] duration metric: took 3.984198071s to provisionDockerMachine
	I0917 00:20:46.187091  853162 start.go:293] postStartSetup for "ha-472903-m02" (driver="docker")
	I0917 00:20:46.187103  853162 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 00:20:46.187161  853162 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 00:20:46.187217  853162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0917 00:20:46.206089  853162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33604 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa Username:docker}
	I0917 00:20:46.307427  853162 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 00:20:46.311177  853162 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 00:20:46.311217  853162 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 00:20:46.311234  853162 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 00:20:46.311243  853162 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 00:20:46.311260  853162 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-749120/.minikube/addons for local assets ...
	I0917 00:20:46.311321  853162 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-749120/.minikube/files for local assets ...
	I0917 00:20:46.311449  853162 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> 7527072.pem in /etc/ssl/certs
	I0917 00:20:46.311463  853162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> /etc/ssl/certs/7527072.pem
	I0917 00:20:46.311587  853162 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 00:20:46.320721  853162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem --> /etc/ssl/certs/7527072.pem (1708 bytes)
	I0917 00:20:46.344942  853162 start.go:296] duration metric: took 157.832568ms for postStartSetup
	I0917 00:20:46.345031  853162 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:20:46.345086  853162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0917 00:20:46.363814  853162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33604 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa Username:docker}
	I0917 00:20:46.481346  853162 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:20:46.496931  853162 fix.go:56] duration metric: took 4.630916775s for fixHost
	I0917 00:20:46.496962  853162 start.go:83] releasing machines lock for "ha-472903-m02", held for 4.630985515s
	I0917 00:20:46.497035  853162 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m02
	I0917 00:20:46.537374  853162 out.go:179] * Found network options:
	I0917 00:20:46.538638  853162 out.go:179]   - NO_PROXY=192.168.49.2
	W0917 00:20:46.539662  853162 proxy.go:120] fail to check proxy env: Error ip not in block
	W0917 00:20:46.539717  853162 proxy.go:120] fail to check proxy env: Error ip not in block
	I0917 00:20:46.539804  853162 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 00:20:46.539839  853162 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 00:20:46.539854  853162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0917 00:20:46.539932  853162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0917 00:20:46.572153  853162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33604 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa Username:docker}
	I0917 00:20:46.578652  853162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33604 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa Username:docker}
	I0917 00:20:46.683527  853162 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0917 00:20:46.811849  853162 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0917 00:20:46.811930  853162 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:20:46.822453  853162 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0917 00:20:46.822480  853162 start.go:495] detecting cgroup driver to use...
	I0917 00:20:46.822516  853162 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:20:46.822567  853162 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 00:20:46.838817  853162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 00:20:46.851233  853162 docker.go:218] disabling cri-docker service (if available) ...
	I0917 00:20:46.851313  853162 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 00:20:46.866239  853162 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 00:20:46.878018  853162 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 00:20:46.975963  853162 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 00:20:47.072966  853162 docker.go:234] disabling docker service ...
	I0917 00:20:47.073043  853162 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 00:20:47.086710  853162 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 00:20:47.097597  853162 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 00:20:47.193710  853162 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 00:20:47.309500  853162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 00:20:47.324077  853162 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:20:47.342485  853162 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0917 00:20:47.353031  853162 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 00:20:47.365974  853162 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0917 00:20:47.366037  853162 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0917 00:20:47.377743  853162 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:20:47.390203  853162 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 00:20:47.403773  853162 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:20:47.415555  853162 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 00:20:47.427011  853162 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 00:20:47.439059  853162 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 00:20:47.457755  853162 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 00:20:47.476067  853162 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 00:20:47.485475  853162 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 00:20:47.493856  853162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:20:47.640430  853162 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0917 00:20:47.965745  853162 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0917 00:20:47.965818  853162 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0917 00:20:47.971657  853162 start.go:563] Will wait 60s for crictl version
	I0917 00:20:47.971722  853162 ssh_runner.go:195] Run: which crictl
	I0917 00:20:47.976692  853162 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:20:48.022291  853162 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0917 00:20:48.022364  853162 ssh_runner.go:195] Run: containerd --version
	I0917 00:20:48.051041  853162 ssh_runner.go:195] Run: containerd --version
	I0917 00:20:48.079146  853162 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0917 00:20:48.080252  853162 out.go:179]   - env NO_PROXY=192.168.49.2
	I0917 00:20:48.081331  853162 cli_runner.go:164] Run: docker network inspect ha-472903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:20:48.097998  853162 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0917 00:20:48.102017  853162 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:20:48.114518  853162 mustload.go:65] Loading cluster: ha-472903
	I0917 00:20:48.114732  853162 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:20:48.115010  853162 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0917 00:20:48.131516  853162 host.go:66] Checking if "ha-472903" exists ...
	I0917 00:20:48.131726  853162 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903 for IP: 192.168.49.3
	I0917 00:20:48.131736  853162 certs.go:194] generating shared ca certs ...
	I0917 00:20:48.131750  853162 certs.go:226] acquiring lock for ca certs: {Name:mk87d179b4a631193bd9c86db8034ccf19400cde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:20:48.131851  853162 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key
	I0917 00:20:48.131885  853162 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key
	I0917 00:20:48.131894  853162 certs.go:256] generating profile certs ...
	I0917 00:20:48.131975  853162 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key
	I0917 00:20:48.132015  853162 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.d67fba4a
	I0917 00:20:48.132061  853162 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key
	I0917 00:20:48.132076  853162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 00:20:48.132094  853162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 00:20:48.132107  853162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 00:20:48.132119  853162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 00:20:48.132133  853162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 00:20:48.132146  853162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 00:20:48.132158  853162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 00:20:48.132170  853162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 00:20:48.132219  853162 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem (1338 bytes)
	W0917 00:20:48.132247  853162 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707_empty.pem, impossibly tiny 0 bytes
	I0917 00:20:48.132256  853162 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 00:20:48.132276  853162 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem (1078 bytes)
	I0917 00:20:48.132298  853162 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem (1123 bytes)
	I0917 00:20:48.132320  853162 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem (1675 bytes)
	I0917 00:20:48.132366  853162 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem (1708 bytes)
	I0917 00:20:48.132391  853162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> /usr/share/ca-certificates/7527072.pem
	I0917 00:20:48.132426  853162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:20:48.132448  853162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem -> /usr/share/ca-certificates/752707.pem
	I0917 00:20:48.132506  853162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0917 00:20:48.148623  853162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33599 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0917 00:20:48.235670  853162 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0917 00:20:48.239525  853162 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0917 00:20:48.251602  853162 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0917 00:20:48.254862  853162 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0917 00:20:48.266491  853162 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0917 00:20:48.269542  853162 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0917 00:20:48.281769  853162 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0917 00:20:48.284982  853162 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0917 00:20:48.296565  853162 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0917 00:20:48.300092  853162 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0917 00:20:48.317833  853162 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0917 00:20:48.321346  853162 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0917 00:20:48.335793  853162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 00:20:48.363041  853162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 00:20:48.387573  853162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 00:20:48.411906  853162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0917 00:20:48.435720  853162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0917 00:20:48.458754  853162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 00:20:48.481880  853162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 00:20:48.504634  853162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 00:20:48.528137  853162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem --> /usr/share/ca-certificates/7527072.pem (1708 bytes)
	I0917 00:20:48.550768  853162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 00:20:48.573488  853162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem --> /usr/share/ca-certificates/752707.pem (1338 bytes)
	I0917 00:20:48.596469  853162 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0917 00:20:48.614086  853162 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0917 00:20:48.630956  853162 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0917 00:20:48.648596  853162 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0917 00:20:48.667549  853162 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0917 00:20:48.687524  853162 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0917 00:20:48.709133  853162 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0917 00:20:48.732722  853162 ssh_runner.go:195] Run: openssl version
	I0917 00:20:48.739812  853162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7527072.pem && ln -fs /usr/share/ca-certificates/7527072.pem /etc/ssl/certs/7527072.pem"
	I0917 00:20:48.752002  853162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7527072.pem
	I0917 00:20:48.755921  853162 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 23:54 /usr/share/ca-certificates/7527072.pem
	I0917 00:20:48.755972  853162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7527072.pem
	I0917 00:20:48.764121  853162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7527072.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 00:20:48.775444  853162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 00:20:48.786765  853162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:20:48.791603  853162 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:20:48.791655  853162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:20:48.800012  853162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 00:20:48.810322  853162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/752707.pem && ln -fs /usr/share/ca-certificates/752707.pem /etc/ssl/certs/752707.pem"
	I0917 00:20:48.820434  853162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/752707.pem
	I0917 00:20:48.823957  853162 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 23:54 /usr/share/ca-certificates/752707.pem
	I0917 00:20:48.824004  853162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/752707.pem
	I0917 00:20:48.830646  853162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/752707.pem /etc/ssl/certs/51391683.0"
	I0917 00:20:48.839222  853162 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 00:20:48.842563  853162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 00:20:48.849046  853162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 00:20:48.855529  853162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 00:20:48.861801  853162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 00:20:48.868136  853162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 00:20:48.874594  853162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0917 00:20:48.880788  853162 kubeadm.go:926] updating node {m02 192.168.49.3 8443 v1.34.0 containerd true true} ...
	I0917 00:20:48.880874  853162 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-472903-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 00:20:48.880898  853162 kube-vip.go:115] generating kube-vip config ...
	I0917 00:20:48.880935  853162 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0917 00:20:48.893271  853162 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:20:48.893323  853162 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0917 00:20:48.893358  853162 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 00:20:48.901765  853162 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 00:20:48.901815  853162 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0917 00:20:48.910619  853162 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0917 00:20:48.929104  853162 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 00:20:48.946566  853162 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0917 00:20:48.963876  853162 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0917 00:20:48.967541  853162 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:20:48.978442  853162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:20:49.084866  853162 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:20:49.097751  853162 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0917 00:20:49.098032  853162 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:20:49.099974  853162 out.go:179] * Verifying Kubernetes components...
	I0917 00:20:49.101123  853162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:20:49.204289  853162 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:20:49.220147  853162 kapi.go:59] client config for ha-472903: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key", CAFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0917 00:20:49.220241  853162 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0917 00:20:49.220551  853162 node_ready.go:35] waiting up to 6m0s for node "ha-472903-m02" to be "Ready" ...
	I0917 00:20:49.229156  853162 node_ready.go:49] node "ha-472903-m02" is "Ready"
	I0917 00:20:49.229189  853162 node_ready.go:38] duration metric: took 8.616304ms for node "ha-472903-m02" to be "Ready" ...
	I0917 00:20:49.229204  853162 api_server.go:52] waiting for apiserver process to appear ...
	I0917 00:20:49.229258  853162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:20:49.240716  853162 api_server.go:72] duration metric: took 142.915997ms to wait for apiserver process to appear ...
	I0917 00:20:49.240740  853162 api_server.go:88] waiting for apiserver healthz status ...
	I0917 00:20:49.240759  853162 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0917 00:20:49.245513  853162 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0917 00:20:49.246360  853162 api_server.go:141] control plane version: v1.34.0
	I0917 00:20:49.246384  853162 api_server.go:131] duration metric: took 5.636359ms to wait for apiserver health ...
	I0917 00:20:49.246392  853162 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 00:20:49.252346  853162 system_pods.go:59] 24 kube-system pods found
	I0917 00:20:49.252376  853162 system_pods.go:61] "coredns-66bc5c9577-c94hz" [774f1c0f-9759-44c2-957d-5a97670f951b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:20:49.252387  853162 system_pods.go:61] "coredns-66bc5c9577-qn8m7" [1c58205e-e865-42fc-8282-23e3d779ee97] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:20:49.252399  853162 system_pods.go:61] "etcd-ha-472903" [e333577b-838c-41c5-ba86-ce3d7de57077] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 00:20:49.252408  853162 system_pods.go:61] "etcd-ha-472903-m02" [8a478117-c53d-4621-aa09-be3c16d386c0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 00:20:49.252425  853162 system_pods.go:61] "etcd-ha-472903-m03" [73e10c6a-306a-4c7e-b816-1b8d6b815292] Running
	I0917 00:20:49.252435  853162 system_pods.go:61] "kindnet-lh7dv" [1da43ca7-9af7-4573-9cdc-fd21b098ca2c] Running
	I0917 00:20:49.252442  853162 system_pods.go:61] "kindnet-q7c7s" [85db5b30-8ace-4bb0-8886-32b9ca032b2b] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0917 00:20:49.252451  853162 system_pods.go:61] "kindnet-x6twd" [f2346479-1adb-4bc7-af07-971525be2b05] Running
	I0917 00:20:49.252456  853162 system_pods.go:61] "kube-apiserver-ha-472903" [e2844751-3962-4753-8b63-79c124dd5fd7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:20:49.252462  853162 system_pods.go:61] "kube-apiserver-ha-472903-m02" [6675419c-7693-4970-b73c-8415bcda1684] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:20:49.252466  853162 system_pods.go:61] "kube-apiserver-ha-472903-m03" [8c79e747-e193-4471-a4be-ab4d604998ad] Running
	I0917 00:20:49.252474  853162 system_pods.go:61] "kube-controller-manager-ha-472903" [be5cfd0b-a3b9-44cf-8cde-74e9eb89c738] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 00:20:49.252481  853162 system_pods.go:61] "kube-controller-manager-ha-472903-m02" [54f6e7e0-0a78-4651-b24f-f902c6bf7efb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 00:20:49.252485  853162 system_pods.go:61] "kube-controller-manager-ha-472903-m03" [d48b1c84-6653-43e8-9322-ae2c64471dde] Running
	I0917 00:20:49.252493  853162 system_pods.go:61] "kube-proxy-58lkb" [32fed88c-ce9e-4536-8e96-04ab5b4f5d42] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 00:20:49.252496  853162 system_pods.go:61] "kube-proxy-d4m8f" [d4a70eec-48a7-4ea6-871a-1b5ed2beca9a] Running
	I0917 00:20:49.252513  853162 system_pods.go:61] "kube-proxy-kn6nb" [53644856-9fda-4556-bbb5-12254c4b00a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 00:20:49.252520  853162 system_pods.go:61] "kube-scheduler-ha-472903" [e949de65-b218-45cb-abe7-79b704aae473] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 00:20:49.252526  853162 system_pods.go:61] "kube-scheduler-ha-472903-m02" [08b5a4f0-3aa6-4a82-b171-afc1eafcd4c2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 00:20:49.252533  853162 system_pods.go:61] "kube-scheduler-ha-472903-m03" [7b954c46-3b8c-47e7-b10c-a20dd936d45c] Running
	I0917 00:20:49.252537  853162 system_pods.go:61] "kube-vip-ha-472903" [d3849ebb-d365-491f-955c-2a7ca580290b] Running
	I0917 00:20:49.252540  853162 system_pods.go:61] "kube-vip-ha-472903-m02" [748f096f-bec6-4de8-92f0-128db827bdd6] Running
	I0917 00:20:49.252543  853162 system_pods.go:61] "kube-vip-ha-472903-m03" [62b1f237-95c3-4a40-b3b7-a519f7c80ad4] Running
	I0917 00:20:49.252547  853162 system_pods.go:61] "storage-provisioner" [ac7f283e-4d28-46cf-a519-bd227237d5e7] Running
	I0917 00:20:49.252551  853162 system_pods.go:74] duration metric: took 6.154731ms to wait for pod list to return data ...
	I0917 00:20:49.252558  853162 default_sa.go:34] waiting for default service account to be created ...
	I0917 00:20:49.254869  853162 default_sa.go:45] found service account: "default"
	I0917 00:20:49.254888  853162 default_sa.go:55] duration metric: took 2.323687ms for default service account to be created ...
	I0917 00:20:49.254897  853162 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 00:20:49.260719  853162 system_pods.go:86] 24 kube-system pods found
	I0917 00:20:49.260744  853162 system_pods.go:89] "coredns-66bc5c9577-c94hz" [774f1c0f-9759-44c2-957d-5a97670f951b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:20:49.260753  853162 system_pods.go:89] "coredns-66bc5c9577-qn8m7" [1c58205e-e865-42fc-8282-23e3d779ee97] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:20:49.260765  853162 system_pods.go:89] "etcd-ha-472903" [e333577b-838c-41c5-ba86-ce3d7de57077] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 00:20:49.260777  853162 system_pods.go:89] "etcd-ha-472903-m02" [8a478117-c53d-4621-aa09-be3c16d386c0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 00:20:49.260786  853162 system_pods.go:89] "etcd-ha-472903-m03" [73e10c6a-306a-4c7e-b816-1b8d6b815292] Running
	I0917 00:20:49.260793  853162 system_pods.go:89] "kindnet-lh7dv" [1da43ca7-9af7-4573-9cdc-fd21b098ca2c] Running
	I0917 00:20:49.260805  853162 system_pods.go:89] "kindnet-q7c7s" [85db5b30-8ace-4bb0-8886-32b9ca032b2b] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0917 00:20:49.260813  853162 system_pods.go:89] "kindnet-x6twd" [f2346479-1adb-4bc7-af07-971525be2b05] Running
	I0917 00:20:49.260819  853162 system_pods.go:89] "kube-apiserver-ha-472903" [e2844751-3962-4753-8b63-79c124dd5fd7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:20:49.260827  853162 system_pods.go:89] "kube-apiserver-ha-472903-m02" [6675419c-7693-4970-b73c-8415bcda1684] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:20:49.260832  853162 system_pods.go:89] "kube-apiserver-ha-472903-m03" [8c79e747-e193-4471-a4be-ab4d604998ad] Running
	I0917 00:20:49.260840  853162 system_pods.go:89] "kube-controller-manager-ha-472903" [be5cfd0b-a3b9-44cf-8cde-74e9eb89c738] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 00:20:49.260845  853162 system_pods.go:89] "kube-controller-manager-ha-472903-m02" [54f6e7e0-0a78-4651-b24f-f902c6bf7efb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 00:20:49.260853  853162 system_pods.go:89] "kube-controller-manager-ha-472903-m03" [d48b1c84-6653-43e8-9322-ae2c64471dde] Running
	I0917 00:20:49.260859  853162 system_pods.go:89] "kube-proxy-58lkb" [32fed88c-ce9e-4536-8e96-04ab5b4f5d42] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 00:20:49.260862  853162 system_pods.go:89] "kube-proxy-d4m8f" [d4a70eec-48a7-4ea6-871a-1b5ed2beca9a] Running
	I0917 00:20:49.260869  853162 system_pods.go:89] "kube-proxy-kn6nb" [53644856-9fda-4556-bbb5-12254c4b00a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 00:20:49.260880  853162 system_pods.go:89] "kube-scheduler-ha-472903" [e949de65-b218-45cb-abe7-79b704aae473] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 00:20:49.260893  853162 system_pods.go:89] "kube-scheduler-ha-472903-m02" [08b5a4f0-3aa6-4a82-b171-afc1eafcd4c2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 00:20:49.260903  853162 system_pods.go:89] "kube-scheduler-ha-472903-m03" [7b954c46-3b8c-47e7-b10c-a20dd936d45c] Running
	I0917 00:20:49.260912  853162 system_pods.go:89] "kube-vip-ha-472903" [d3849ebb-d365-491f-955c-2a7ca580290b] Running
	I0917 00:20:49.260918  853162 system_pods.go:89] "kube-vip-ha-472903-m02" [748f096f-bec6-4de8-92f0-128db827bdd6] Running
	I0917 00:20:49.260922  853162 system_pods.go:89] "kube-vip-ha-472903-m03" [62b1f237-95c3-4a40-b3b7-a519f7c80ad4] Running
	I0917 00:20:49.260925  853162 system_pods.go:89] "storage-provisioner" [ac7f283e-4d28-46cf-a519-bd227237d5e7] Running
	I0917 00:20:49.260932  853162 system_pods.go:126] duration metric: took 6.029773ms to wait for k8s-apps to be running ...
	I0917 00:20:49.260939  853162 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 00:20:49.260991  853162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:20:49.275832  853162 system_svc.go:56] duration metric: took 14.884993ms WaitForService to wait for kubelet
	I0917 00:20:49.275863  853162 kubeadm.go:578] duration metric: took 178.064338ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:20:49.275886  853162 node_conditions.go:102] verifying NodePressure condition ...
	I0917 00:20:49.278927  853162 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:20:49.278956  853162 node_conditions.go:123] node cpu capacity is 8
	I0917 00:20:49.278968  853162 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:20:49.278975  853162 node_conditions.go:123] node cpu capacity is 8
	I0917 00:20:49.278979  853162 node_conditions.go:105] duration metric: took 3.087442ms to run NodePressure ...
	I0917 00:20:49.278989  853162 start.go:241] waiting for startup goroutines ...
	I0917 00:20:49.279011  853162 start.go:255] writing updated cluster config ...
	I0917 00:20:49.280899  853162 out.go:203] 
	I0917 00:20:49.282298  853162 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:20:49.282399  853162 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0917 00:20:49.283991  853162 out.go:179] * Starting "ha-472903-m04" worker node in "ha-472903" cluster
	I0917 00:20:49.285326  853162 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0917 00:20:49.286460  853162 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:20:49.287455  853162 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0917 00:20:49.287476  853162 cache.go:58] Caching tarball of preloaded images
	I0917 00:20:49.287528  853162 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:20:49.287571  853162 preload.go:172] Found /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 00:20:49.287584  853162 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0917 00:20:49.287709  853162 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0917 00:20:49.307816  853162 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:20:49.307837  853162 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:20:49.307857  853162 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:20:49.307889  853162 start.go:360] acquireMachinesLock for ha-472903-m04: {Name:mkdbbd0d5b3cd7ad4b13d37f2d562d6d6421c5de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:20:49.307967  853162 start.go:364] duration metric: took 58.431µs to acquireMachinesLock for "ha-472903-m04"
	I0917 00:20:49.307989  853162 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:20:49.307997  853162 fix.go:54] fixHost starting: m04
	I0917 00:20:49.308294  853162 cli_runner.go:164] Run: docker container inspect ha-472903-m04 --format={{.State.Status}}
	I0917 00:20:49.327243  853162 fix.go:112] recreateIfNeeded on ha-472903-m04: state=Stopped err=<nil>
	W0917 00:20:49.327267  853162 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:20:49.328775  853162 out.go:252] * Restarting existing docker container for "ha-472903-m04" ...
	I0917 00:20:49.328840  853162 cli_runner.go:164] Run: docker start ha-472903-m04
	I0917 00:20:49.560781  853162 cli_runner.go:164] Run: docker container inspect ha-472903-m04 --format={{.State.Status}}
	I0917 00:20:49.579860  853162 kic.go:430] container "ha-472903-m04" state is running.
	I0917 00:20:49.580191  853162 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m04
	I0917 00:20:49.599441  853162 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0917 00:20:49.599717  853162 machine.go:93] provisionDockerMachine start ...
	I0917 00:20:49.599798  853162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	I0917 00:20:49.619815  853162 main.go:141] libmachine: Using SSH client type: native
	I0917 00:20:49.620099  853162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33609 <nil> <nil>}
	I0917 00:20:49.620117  853162 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:20:49.620757  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48712->127.0.0.1:33609: read: connection reset by peer
	I0917 00:20:52.657577  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:20:55.693322  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:20:58.729135  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:21:01.764765  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:21:04.801662  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:21:07.838629  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:21:10.874479  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:21:13.910566  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:21:16.946663  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:21:19.982838  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:21:23.019051  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:21:26.055606  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:21:29.092586  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:21:32.129307  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:21:35.165023  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:21:38.201362  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:21:41.238243  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:21:44.274894  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:21:47.311231  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:21:50.346548  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:21:53.382155  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:21:56.417647  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:21:59.454124  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:22:02.489943  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:22:05.524569  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:22:08.561054  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:22:11.596972  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:22:14.633851  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:22:17.670632  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:22:20.706720  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:22:23.742474  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:22:26.778365  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:22:29.814947  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:22:32.849507  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:22:35.884812  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:22:38.921115  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:22:41.957993  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:22:44.993151  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:22:48.030381  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:22:51.065019  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:22:54.102634  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:22:57.139538  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:23:00.175672  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:23:03.211394  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:23:06.246468  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:23:09.281382  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:23:12.316511  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:23:15.352315  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:23:18.389406  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:23:21.425527  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:23:24.462259  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:23:27.499604  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:23:30.534707  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:23:33.570156  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:23:36.607519  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:23:39.644455  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:23:42.680507  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:23:45.716317  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:23:48.752201  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:23:51.753566  853162 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:23:51.753615  853162 ubuntu.go:182] provisioning hostname "ha-472903-m04"
	I0917 00:23:51.753683  853162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	I0917 00:23:51.772834  853162 main.go:141] libmachine: Using SSH client type: native
	I0917 00:23:51.773066  853162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33609 <nil> <nil>}
	I0917 00:23:51.773078  853162 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-472903-m04 && echo "ha-472903-m04" | sudo tee /etc/hostname
	I0917 00:23:51.808205  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:23:54.844740  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:23:57.881062  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:00.916368  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:03.951873  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:06.988244  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:10.023269  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:13.059288  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:16.095369  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:19.130941  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:22.165674  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:25.200877  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:28.237226  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:31.272916  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:34.309792  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:37.346111  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:40.381702  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:43.416825  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:46.453150  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:49.488904  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:52.524230  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:55.559006  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:58.595706  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:01.631780  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:04.668852  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:07.705815  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:10.741883  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:13.778174  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:16.814616  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:19.851759  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:22.887953  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:25.923578  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:28.960659  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:31.995533  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:35.030980  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:38.068282  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:41.103292  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:44.139821  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:47.174864  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:50.210043  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:53.245796  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:56.282220  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:59.318344  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:02.354126  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:05.389652  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:08.425132  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:11.460937  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:14.496551  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:17.533674  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:20.569685  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:23.606874  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:26.643570  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:29.680754  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain

** /stderr **
ha_test.go:564: failed to start cluster. args "out/minikube-linux-amd64 -p ha-472903 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd" : signal: killed
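The stderr above shows libmachine retrying roughly every three seconds with "ssh: unable to authenticate, attempted methods [none publickey]" while provisioning ha-472903-m04 over 127.0.0.1:33609. One way to narrow this down is to attempt the same connection by hand with the key minikube keeps for the node. This is only a sketch: port 33609 is copied from the provisioning log above, while the m04 key path is assumed to follow the .minikube/machines/<node>/id_rsa pattern the log later shows for ha-472903.

    # Hedged reproduction of the failing handshake; the key path is an assumption, the port comes from the log.
    ssh -o IdentitiesOnly=yes -o StrictHostKeyChecking=no \
        -i /home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m04/id_rsa \
        -p 33609 docker@127.0.0.1 hostname

A "Permission denied (publickey)" here would point at a key mismatch inside the recreated m04 container rather than at the port mapping itself.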
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-472903
helpers_test.go:243: (dbg) docker inspect ha-472903:

-- stdout --
	[
	    {
	        "Id": "05f03528ecc5ba6a39041bcc2845d236679d61fa3752c15e7e068dac7d8c9047",
	        "Created": "2025-09-16T23:56:35.178831158Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 853358,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-17T00:20:35.025672975Z",
	            "FinishedAt": "2025-09-17T00:20:34.368293613Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/05f03528ecc5ba6a39041bcc2845d236679d61fa3752c15e7e068dac7d8c9047/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/05f03528ecc5ba6a39041bcc2845d236679d61fa3752c15e7e068dac7d8c9047/hostname",
	        "HostsPath": "/var/lib/docker/containers/05f03528ecc5ba6a39041bcc2845d236679d61fa3752c15e7e068dac7d8c9047/hosts",
	        "LogPath": "/var/lib/docker/containers/05f03528ecc5ba6a39041bcc2845d236679d61fa3752c15e7e068dac7d8c9047/05f03528ecc5ba6a39041bcc2845d236679d61fa3752c15e7e068dac7d8c9047-json.log",
	        "Name": "/ha-472903",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-472903:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-472903",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "05f03528ecc5ba6a39041bcc2845d236679d61fa3752c15e7e068dac7d8c9047",
	                "LowerDir": "/var/lib/docker/overlay2/37229b42d46c992f89d690b880f5a9c43e154eecc2ad5aeee133e9eb30accccb-init/diff:/var/lib/docker/overlay2/949a3fbecd0c2c005aa419b7ddc214e7bf4333225d7b227e8b0d0ea188b945ec/diff",
	                "MergedDir": "/var/lib/docker/overlay2/37229b42d46c992f89d690b880f5a9c43e154eecc2ad5aeee133e9eb30accccb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/37229b42d46c992f89d690b880f5a9c43e154eecc2ad5aeee133e9eb30accccb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/37229b42d46c992f89d690b880f5a9c43e154eecc2ad5aeee133e9eb30accccb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-472903",
	                "Source": "/var/lib/docker/volumes/ha-472903/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-472903",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-472903",
	                "name.minikube.sigs.k8s.io": "ha-472903",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6e5c4d793d20a56f0e67f641fa3279cbbc87f103ed5242e69a6f5688bc0f14a9",
	            "SandboxKey": "/var/run/docker/netns/6e5c4d793d20",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33599"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33600"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33603"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33601"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33602"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-472903": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6e:90:09:5f:9e:36",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "22d49b2f397dfabc2a3967bd54b05204a52976e683f65ff07bff00e793040bef",
	                    "EndpointID": "1e5cc79cbbccaeb55f50e979752df2f515fe67e4d84df2c7a7b18a858331e94b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-472903",
	                        "05f03528ecc5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
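The Ports block in the inspect output above is what libmachine reads when it resolves a node's SSH endpoint; the provisioning log further down runs the same Go-template lookup. As a sketch, re-running that template against this container should print the 33599 mapping shown above:

    # Prints the host port Docker mapped to the container's 22/tcp (same template the log shows libmachine using).
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-472903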
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-472903 -n ha-472903
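The --format={{.Host}} flag above applies a Go template to minikube's status struct, printing only the machine-state field. For the full struct the JSON output form can be used instead; this is a sketch that assumes the standard status flags, which this run does not exercise:

    # Dumps the whole status struct (Host, Kubelet, APIServer, Kubeconfig, ...) as JSON.
    out/minikube-linux-amd64 status -p ha-472903 --output json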
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p ha-472903 logs -n 25: (1.464899445s)
helpers_test.go:260: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-472903 cp ha-472903-m03:/home/docker/cp-test.txt ha-472903-m04:/home/docker/cp-test_ha-472903-m03_ha-472903-m04.txt               │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ ssh     │ ha-472903 ssh -n ha-472903-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │ 17 Sep 25 00:11 UTC │
	│ ssh     │ ha-472903 ssh -n ha-472903-m04 sudo cat /home/docker/cp-test_ha-472903-m03_ha-472903-m04.txt                                         │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ cp      │ ha-472903 cp testdata/cp-test.txt ha-472903-m04:/home/docker/cp-test.txt                                                             │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ ssh     │ ha-472903 ssh -n ha-472903-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ cp      │ ha-472903 cp ha-472903-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2970888916/001/cp-test_ha-472903-m04.txt │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ ssh     │ ha-472903 ssh -n ha-472903-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ cp      │ ha-472903 cp ha-472903-m04:/home/docker/cp-test.txt ha-472903:/home/docker/cp-test_ha-472903-m04_ha-472903.txt                       │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ ssh     │ ha-472903 ssh -n ha-472903-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ ssh     │ ha-472903 ssh -n ha-472903 sudo cat /home/docker/cp-test_ha-472903-m04_ha-472903.txt                                                 │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ cp      │ ha-472903 cp ha-472903-m04:/home/docker/cp-test.txt ha-472903-m02:/home/docker/cp-test_ha-472903-m04_ha-472903-m02.txt               │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ ssh     │ ha-472903 ssh -n ha-472903-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ ssh     │ ha-472903 ssh -n ha-472903-m02 sudo cat /home/docker/cp-test_ha-472903-m04_ha-472903-m02.txt                                         │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ cp      │ ha-472903 cp ha-472903-m04:/home/docker/cp-test.txt ha-472903-m03:/home/docker/cp-test_ha-472903-m04_ha-472903-m03.txt               │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ ssh     │ ha-472903 ssh -n ha-472903-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ ssh     │ ha-472903 ssh -n ha-472903-m03 sudo cat /home/docker/cp-test_ha-472903-m04_ha-472903-m03.txt                                         │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │                     │
	│ node    │ ha-472903 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │ 17 Sep 25 00:11 UTC │
	│ node    │ ha-472903 node start m02 --alsologtostderr -v 5                                                                                      │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │ 17 Sep 25 00:11 UTC │
	│ node    │ ha-472903 node list --alsologtostderr -v 5                                                                                           │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │                     │
	│ stop    │ ha-472903 stop --alsologtostderr -v 5                                                                                                │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │ 17 Sep 25 00:13 UTC │
	│ start   │ ha-472903 start --wait true --alsologtostderr -v 5                                                                                   │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:13 UTC │                     │
	│ node    │ ha-472903 node list --alsologtostderr -v 5                                                                                           │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:19 UTC │                     │
	│ node    │ ha-472903 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:19 UTC │ 17 Sep 25 00:20 UTC │
	│ stop    │ ha-472903 stop --alsologtostderr -v 5                                                                                                │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:20 UTC │ 17 Sep 25 00:20 UTC │
	│ start   │ ha-472903 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd                                   │ ha-472903 │ jenkins │ v1.37.0 │ 17 Sep 25 00:20 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/17 00:20:34
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 00:20:34.817765  853162 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:20:34.818050  853162 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:20:34.818060  853162 out.go:374] Setting ErrFile to fd 2...
	I0917 00:20:34.818066  853162 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:20:34.818300  853162 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-749120/.minikube/bin
	I0917 00:20:34.818768  853162 out.go:368] Setting JSON to false
	I0917 00:20:34.819697  853162 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":10977,"bootTime":1758057458,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 00:20:34.819797  853162 start.go:140] virtualization: kvm guest
	I0917 00:20:34.821739  853162 out.go:179] * [ha-472903] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0917 00:20:34.822814  853162 out.go:179]   - MINIKUBE_LOCATION=21550
	I0917 00:20:34.822810  853162 notify.go:220] Checking for updates...
	I0917 00:20:34.823822  853162 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 00:20:34.824878  853162 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-749120/kubeconfig
	I0917 00:20:34.825935  853162 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-749120/.minikube
	I0917 00:20:34.827008  853162 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 00:20:34.827960  853162 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 00:20:34.829312  853162 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:20:34.829841  853162 driver.go:421] Setting default libvirt URI to qemu:///system
	I0917 00:20:34.852217  853162 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0917 00:20:34.852298  853162 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:20:34.905027  853162 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-17 00:20:34.895284982 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:20:34.905128  853162 docker.go:318] overlay module found
	I0917 00:20:34.906623  853162 out.go:179] * Using the docker driver based on existing profile
	I0917 00:20:34.907564  853162 start.go:304] selected driver: docker
	I0917 00:20:34.907577  853162 start.go:918] validating driver "docker" against &{Name:ha-472903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNam
es:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false
kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath
: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:20:34.907679  853162 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 00:20:34.907759  853162 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:20:34.958491  853162 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-17 00:20:34.949319318 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:20:34.959144  853162 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:20:34.959172  853162 cni.go:84] Creating CNI manager for ""
	I0917 00:20:34.959227  853162 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0917 00:20:34.959277  853162 start.go:348] cluster config:
	{Name:ha-472903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISoc
ket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-s
erver:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPaus
eInterval:1m0s}
	I0917 00:20:34.960902  853162 out.go:179] * Starting "ha-472903" primary control-plane node in "ha-472903" cluster
	I0917 00:20:34.961889  853162 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0917 00:20:34.962922  853162 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:20:34.963776  853162 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0917 00:20:34.963806  853162 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4
	I0917 00:20:34.963815  853162 cache.go:58] Caching tarball of preloaded images
	I0917 00:20:34.963858  853162 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:20:34.963914  853162 preload.go:172] Found /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 00:20:34.963928  853162 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0917 00:20:34.964067  853162 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0917 00:20:34.983072  853162 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:20:34.983091  853162 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:20:34.983104  853162 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:20:34.983123  853162 start.go:360] acquireMachinesLock for ha-472903: {Name:mk994658ce3314f2aed1dec341debc49d36a4326 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:20:34.983187  853162 start.go:364] duration metric: took 36µs to acquireMachinesLock for "ha-472903"
	I0917 00:20:34.983204  853162 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:20:34.983209  853162 fix.go:54] fixHost starting: 
	I0917 00:20:34.983403  853162 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0917 00:20:34.999350  853162 fix.go:112] recreateIfNeeded on ha-472903: state=Stopped err=<nil>
	W0917 00:20:34.999373  853162 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:20:35.001788  853162 out.go:252] * Restarting existing docker container for "ha-472903" ...
	I0917 00:20:35.001837  853162 cli_runner.go:164] Run: docker start ha-472903
	I0917 00:20:35.217740  853162 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0917 00:20:35.235869  853162 kic.go:430] container "ha-472903" state is running.
	I0917 00:20:35.236239  853162 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903
	I0917 00:20:35.254029  853162 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0917 00:20:35.254260  853162 machine.go:93] provisionDockerMachine start ...
	I0917 00:20:35.254344  853162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0917 00:20:35.272313  853162 main.go:141] libmachine: Using SSH client type: native
	I0917 00:20:35.272572  853162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33599 <nil> <nil>}
	I0917 00:20:35.272585  853162 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:20:35.273254  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40730->127.0.0.1:33599: read: connection reset by peer
	I0917 00:20:38.407730  853162 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-472903
	
	I0917 00:20:38.407755  853162 ubuntu.go:182] provisioning hostname "ha-472903"
	I0917 00:20:38.407804  853162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0917 00:20:38.425553  853162 main.go:141] libmachine: Using SSH client type: native
	I0917 00:20:38.425819  853162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33599 <nil> <nil>}
	I0917 00:20:38.425837  853162 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-472903 && echo "ha-472903" | sudo tee /etc/hostname
	I0917 00:20:38.569475  853162 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-472903
	
	I0917 00:20:38.569616  853162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0917 00:20:38.586741  853162 main.go:141] libmachine: Using SSH client type: native
	I0917 00:20:38.586939  853162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33599 <nil> <nil>}
	I0917 00:20:38.586958  853162 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-472903' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-472903/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-472903' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 00:20:38.719202  853162 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:20:38.719228  853162 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-749120/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-749120/.minikube}
	I0917 00:20:38.719252  853162 ubuntu.go:190] setting up certificates
	I0917 00:20:38.719262  853162 provision.go:84] configureAuth start
	I0917 00:20:38.719317  853162 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903
	I0917 00:20:38.736502  853162 provision.go:143] copyHostCerts
	I0917 00:20:38.736535  853162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem
	I0917 00:20:38.736560  853162 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem, removing ...
	I0917 00:20:38.736573  853162 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem
	I0917 00:20:38.736639  853162 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem (1123 bytes)
	I0917 00:20:38.736722  853162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem
	I0917 00:20:38.736740  853162 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem, removing ...
	I0917 00:20:38.736746  853162 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem
	I0917 00:20:38.736779  853162 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem (1675 bytes)
	I0917 00:20:38.736839  853162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem
	I0917 00:20:38.736856  853162 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem, removing ...
	I0917 00:20:38.736863  853162 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem
	I0917 00:20:38.736886  853162 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem (1078 bytes)
	I0917 00:20:38.736955  853162 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem org=jenkins.ha-472903 san=[127.0.0.1 192.168.49.2 ha-472903 localhost minikube]
	I0917 00:20:39.436644  853162 provision.go:177] copyRemoteCerts
	I0917 00:20:39.436706  853162 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:20:39.436743  853162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0917 00:20:39.454217  853162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33599 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0917 00:20:39.550482  853162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 00:20:39.550536  853162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 00:20:39.574507  853162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 00:20:39.574569  853162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0917 00:20:39.597445  853162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 00:20:39.597494  853162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 00:20:39.620037  853162 provision.go:87] duration metric: took 900.762496ms to configureAuth
	I0917 00:20:39.620063  853162 ubuntu.go:206] setting minikube options for container-runtime
	I0917 00:20:39.620256  853162 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:20:39.620269  853162 machine.go:96] duration metric: took 4.365996012s to provisionDockerMachine
	I0917 00:20:39.620277  853162 start.go:293] postStartSetup for "ha-472903" (driver="docker")
	I0917 00:20:39.620285  853162 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 00:20:39.620327  853162 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 00:20:39.620359  853162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0917 00:20:39.637486  853162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33599 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0917 00:20:39.732442  853162 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 00:20:39.735602  853162 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 00:20:39.735625  853162 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 00:20:39.735632  853162 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 00:20:39.735639  853162 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 00:20:39.735651  853162 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-749120/.minikube/addons for local assets ...
	I0917 00:20:39.735695  853162 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-749120/.minikube/files for local assets ...
	I0917 00:20:39.735772  853162 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> 7527072.pem in /etc/ssl/certs
	I0917 00:20:39.735783  853162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> /etc/ssl/certs/7527072.pem
	I0917 00:20:39.735865  853162 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 00:20:39.744181  853162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem --> /etc/ssl/certs/7527072.pem (1708 bytes)
	I0917 00:20:39.766920  853162 start.go:296] duration metric: took 146.623151ms for postStartSetup
	I0917 00:20:39.766994  853162 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:20:39.767036  853162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0917 00:20:39.784014  853162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33599 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0917 00:20:39.873850  853162 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:20:39.877996  853162 fix.go:56] duration metric: took 4.894780852s for fixHost
	I0917 00:20:39.878018  853162 start.go:83] releasing machines lock for "ha-472903", held for 4.894820875s
	I0917 00:20:39.878073  853162 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903
	I0917 00:20:39.894846  853162 ssh_runner.go:195] Run: cat /version.json
	I0917 00:20:39.894889  853162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0917 00:20:39.894941  853162 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 00:20:39.895004  853162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0917 00:20:39.911644  853162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33599 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0917 00:20:39.912047  853162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33599 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0917 00:20:40.078918  853162 ssh_runner.go:195] Run: systemctl --version
	I0917 00:20:40.083613  853162 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 00:20:40.087967  853162 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0917 00:20:40.105776  853162 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0917 00:20:40.105849  853162 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:20:40.114111  853162 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
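	The find/sed pipeline above patches the loopback CNI config in place (adding a "name": "loopback" entry and pinning cniVersion to 1.0.0), and the second find would rename any bridge/podman configs out of the way; in this run none were present. The result can be checked directly on the node (sketch; the exact filename depends on the base image):

	  # the patched loopback config should now carry "name": "loopback" and "cniVersion": "1.0.0"
	  sudo cat /etc/cni/net.d/*loopback.conf*
	  # disabled bridge/podman configs, if any, would appear with a .mk_disabled suffix (none here)
	  ls -la /etc/cni/net.d/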
	I0917 00:20:40.114131  853162 start.go:495] detecting cgroup driver to use...
	I0917 00:20:40.114160  853162 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:20:40.114211  853162 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 00:20:40.126822  853162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 00:20:40.137466  853162 docker.go:218] disabling cri-docker service (if available) ...
	I0917 00:20:40.137505  853162 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 00:20:40.149166  853162 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 00:20:40.159495  853162 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 00:20:40.221061  853162 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 00:20:40.285639  853162 docker.go:234] disabling docker service ...
	I0917 00:20:40.285699  853162 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 00:20:40.297018  853162 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 00:20:40.307232  853162 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 00:20:40.368438  853162 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 00:20:40.428251  853162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 00:20:40.438823  853162 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:20:40.454486  853162 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0917 00:20:40.463862  853162 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 00:20:40.473036  853162 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0917 00:20:40.473078  853162 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0917 00:20:40.482289  853162 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:20:40.491479  853162 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 00:20:40.500319  853162 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:20:40.509542  853162 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 00:20:40.518237  853162 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 00:20:40.527237  853162 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 00:20:40.536342  853162 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 00:20:40.545444  853162 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 00:20:40.553515  853162 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 00:20:40.561529  853162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:20:40.624992  853162 ssh_runner.go:195] Run: sudo systemctl restart containerd
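	The sed edits above (SystemdCgroup = true, the runc v2 runtime, the pause:3.10.1 sandbox image, conf_dir, enable_unprivileged_ports) only take effect once containerd is restarted, which is what this restart does. One way to confirm the merged configuration afterwards (a sketch, assuming the containerd build in the image supports the `config dump` subcommand) is:

	  # print the effective containerd config and confirm the cgroup driver and pause image
	  sudo containerd config dump | grep -nE 'SystemdCgroup|sandbox_image'
	  # the CRI endpoint should answer again once the socket is back
	  sudo crictl info >/dev/null && echo "CRI endpoint is up"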
	I0917 00:20:40.738118  853162 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0917 00:20:40.738194  853162 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0917 00:20:40.742637  853162 start.go:563] Will wait 60s for crictl version
	I0917 00:20:40.742675  853162 ssh_runner.go:195] Run: which crictl
	I0917 00:20:40.746234  853162 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:20:40.779868  853162 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0917 00:20:40.779923  853162 ssh_runner.go:195] Run: containerd --version
	I0917 00:20:40.803110  853162 ssh_runner.go:195] Run: containerd --version
	I0917 00:20:40.828669  853162 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0917 00:20:40.829724  853162 cli_runner.go:164] Run: docker network inspect ha-472903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:20:40.846721  853162 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0917 00:20:40.850282  853162 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:20:40.861751  853162 kubeadm.go:875] updating cluster {Name:ha-472903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIP
s:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubeta
il:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: Socke
tVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 00:20:40.861890  853162 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0917 00:20:40.861944  853162 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 00:20:40.894814  853162 containerd.go:627] all images are preloaded for containerd runtime.
	I0917 00:20:40.894873  853162 containerd.go:534] Images already preloaded, skipping extraction
	I0917 00:20:40.894952  853162 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 00:20:40.926254  853162 containerd.go:627] all images are preloaded for containerd runtime.
	I0917 00:20:40.926274  853162 cache_images.go:85] Images are preloaded, skipping loading
	I0917 00:20:40.926282  853162 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 containerd true true} ...
	I0917 00:20:40.926376  853162 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-472903 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 00:20:40.926447  853162 ssh_runner.go:195] Run: sudo crictl info
	I0917 00:20:40.958896  853162 cni.go:84] Creating CNI manager for ""
	I0917 00:20:40.958916  853162 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0917 00:20:40.958927  853162 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 00:20:40.958949  853162 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-472903 NodeName:ha-472903 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 00:20:40.959064  853162 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "ha-472903"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
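	The rendered kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml.new a few lines further down and later only diffed against the copy already on disk. If one wanted to check the documents independently (a sketch, assuming the bundled v1.34.0 kubeadm supports `config validate`), something like this would do:

	  # validate the InitConfiguration/ClusterConfiguration/KubeletConfiguration documents
	  sudo /var/lib/minikube/binaries/v1.34.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	  # minikube itself only compares the new file with the existing one (see the diff further below)
	  sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new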
	
	I0917 00:20:40.959093  853162 kube-vip.go:115] generating kube-vip config ...
	I0917 00:20:40.959125  853162 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0917 00:20:40.971220  853162 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:20:40.971326  853162 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
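	kube-vip falls back to ARP-based leader election here because the `lsmod | grep ip_vs` probe above exited 1: with no ip_vs modules loaded it gives up on IPVS control-plane load-balancing, and the manifest it generates simply announces the VIP 192.168.49.254 (vip_arp=true) from whichever control-plane node holds the plndr-cp-lock lease. If IPVS-backed load-balancing were wanted, the modules would have to be loaded first (sketch; this run does not attempt it):

	  # load the IPVS modules kube-vip probes for, then re-run the same check
	  sudo modprobe -a ip_vs ip_vs_rr nf_conntrack
	  lsmod | grep ip_vs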
	I0917 00:20:40.971379  853162 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 00:20:40.979716  853162 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 00:20:40.979776  853162 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0917 00:20:40.987725  853162 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0917 00:20:41.004548  853162 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 00:20:41.021543  853162 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I0917 00:20:41.038428  853162 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0917 00:20:41.055058  853162 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0917 00:20:41.058425  853162 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:20:41.068877  853162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:20:41.128444  853162 ssh_runner.go:195] Run: sudo systemctl start kubelet
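	At this point the kubelet unit, its 10-kubeadm.conf drop-in (with --hostname-override=ha-472903 and --node-ip=192.168.49.2 as shown earlier), and the kube-vip static-pod manifest have all been written and the kubelet started. A quick health check on the node (sketch, not part of the test output) would be:

	  # kubelet should be active and its recent log clean of crash loops
	  systemctl is-active kubelet
	  sudo journalctl -u kubelet -n 20 --no-pager
	  # the kube-vip manifest should sit in the static pod path the KubeletConfiguration points at
	  sudo ls /etc/kubernetes/manifests/kube-vip.yaml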
	I0917 00:20:41.150361  853162 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903 for IP: 192.168.49.2
	I0917 00:20:41.150384  853162 certs.go:194] generating shared ca certs ...
	I0917 00:20:41.150406  853162 certs.go:226] acquiring lock for ca certs: {Name:mk87d179b4a631193bd9c86db8034ccf19400cde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:20:41.150585  853162 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key
	I0917 00:20:41.150648  853162 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key
	I0917 00:20:41.150665  853162 certs.go:256] generating profile certs ...
	I0917 00:20:41.150759  853162 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key
	I0917 00:20:41.150787  853162 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.7fe3fff7
	I0917 00:20:41.150803  853162 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.7fe3fff7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I0917 00:20:41.284811  853162 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.7fe3fff7 ...
	I0917 00:20:41.284845  853162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.7fe3fff7: {Name:mk4e25c9a0c911945cadf30fa1e7c0959be02913 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:20:41.285054  853162 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.7fe3fff7 ...
	I0917 00:20:41.285080  853162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.7fe3fff7: {Name:mkcad529b0b61f0240d944f5171e144f38a585c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:20:41.285202  853162 certs.go:381] copying /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt.7fe3fff7 -> /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt
	I0917 00:20:41.285351  853162 certs.go:385] copying /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.7fe3fff7 -> /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key
	I0917 00:20:41.285542  853162 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key
	I0917 00:20:41.285560  853162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 00:20:41.285573  853162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 00:20:41.285586  853162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 00:20:41.285597  853162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 00:20:41.285608  853162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 00:20:41.285619  853162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 00:20:41.285629  853162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 00:20:41.285639  853162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 00:20:41.285695  853162 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem (1338 bytes)
	W0917 00:20:41.285728  853162 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707_empty.pem, impossibly tiny 0 bytes
	I0917 00:20:41.285737  853162 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 00:20:41.285757  853162 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem (1078 bytes)
	I0917 00:20:41.285778  853162 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem (1123 bytes)
	I0917 00:20:41.285798  853162 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem (1675 bytes)
	I0917 00:20:41.285835  853162 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem (1708 bytes)
	I0917 00:20:41.285861  853162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:20:41.285875  853162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem -> /usr/share/ca-certificates/752707.pem
	I0917 00:20:41.285887  853162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> /usr/share/ca-certificates/7527072.pem
	I0917 00:20:41.286470  853162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 00:20:41.314507  853162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 00:20:41.342333  853162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 00:20:41.368008  853162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0917 00:20:41.391633  853162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0917 00:20:41.414593  853162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 00:20:41.436880  853162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 00:20:41.459519  853162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 00:20:41.482263  853162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 00:20:41.505016  853162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem --> /usr/share/ca-certificates/752707.pem (1338 bytes)
	I0917 00:20:41.528016  853162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem --> /usr/share/ca-certificates/7527072.pem (1708 bytes)
	I0917 00:20:41.550286  853162 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 00:20:41.567962  853162 ssh_runner.go:195] Run: openssl version
	I0917 00:20:41.573075  853162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 00:20:41.582025  853162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:20:41.585261  853162 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:20:41.585312  853162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:20:41.591623  853162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 00:20:41.599661  853162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/752707.pem && ln -fs /usr/share/ca-certificates/752707.pem /etc/ssl/certs/752707.pem"
	I0917 00:20:41.608629  853162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/752707.pem
	I0917 00:20:41.611857  853162 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 23:54 /usr/share/ca-certificates/752707.pem
	I0917 00:20:41.611901  853162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/752707.pem
	I0917 00:20:41.618330  853162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/752707.pem /etc/ssl/certs/51391683.0"
	I0917 00:20:41.626513  853162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7527072.pem && ln -fs /usr/share/ca-certificates/7527072.pem /etc/ssl/certs/7527072.pem"
	I0917 00:20:41.635455  853162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7527072.pem
	I0917 00:20:41.638741  853162 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 23:54 /usr/share/ca-certificates/7527072.pem
	I0917 00:20:41.638775  853162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7527072.pem
	I0917 00:20:41.645252  853162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7527072.pem /etc/ssl/certs/3ec20f2e.0"
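	The three `ln -fs` steps above build OpenSSL's hashed-name lookup directory: each CA file under /usr/share/ca-certificates is linked into /etc/ssl/certs under its subject hash plus ".0" (b5213941, 51391683 and 3ec20f2e in this run), which is how TLS clients on the node locate trusted CAs. The mapping can be reproduced by hand (sketch):

	  # the subject hash determines the symlink name, e.g. b5213941 for minikubeCA.pem
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	  # once the hash symlink exists, verification against the system store should report OK
	  openssl verify /usr/share/ca-certificates/minikubeCA.pem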
	I0917 00:20:41.654246  853162 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 00:20:41.657902  853162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 00:20:41.664731  853162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 00:20:41.673674  853162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 00:20:41.682122  853162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 00:20:41.691803  853162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 00:20:41.701302  853162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
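	The six `-checkend 86400` probes above ask openssl whether each control-plane certificate remains valid for at least the next 24 hours (86400 seconds): the command exits 0 if so and 1 if the certificate is expired or about to expire, presumably feeding minikube's decision on whether certs need regenerating. For example (sketch, using one of the paths above):

	  # exit status tells whether the cert survives the next 24h
	  sudo openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	    && echo "etcd server cert valid for >24h" \
	    || echo "etcd server cert expires within 24h (or is already expired)"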
	I0917 00:20:41.709987  853162 kubeadm.go:392] StartCluster: {Name:ha-472903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[
] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:
false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVM
netPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:20:41.710184  853162 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0917 00:20:41.710238  853162 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 00:20:41.761776  853162 cri.go:89] found id: "b5592b8113e586d14715b024e8de6717e69df30cb94f4109a26ff3cab584226d"
	I0917 00:20:41.761804  853162 cri.go:89] found id: "c8a737e1be33c6e4b6e17f5359483d22d3eeb7ca2497546109c1097eb9343a7f"
	I0917 00:20:41.761810  853162 cri.go:89] found id: "2a56abb41f49d6755de68bb41070eee7c07fee5950b2584042a3850228b3c274"
	I0917 00:20:41.761815  853162 cri.go:89] found id: "aeea8f1127caf7117ade119a9e492104789925a531209d0aba3022cd18cb7ce1"
	I0917 00:20:41.761820  853162 cri.go:89] found id: "9fc46931c7aae5fea2058b723439b03184beee352ff9a7efcf262818181a635d"
	I0917 00:20:41.761825  853162 cri.go:89] found id: "b1c8344888d7deab1a3203bf9e16eefcb945905ec04b591acfb2fed3104948ec"
	I0917 00:20:41.761829  853162 cri.go:89] found id: "9685cc588651ced2d51ab783a94533fff6a60971435eaa8e11982eb715ef5350"
	I0917 00:20:41.761833  853162 cri.go:89] found id: "c3f8ee22fca28b303f553c3003d1000b80565b4147ba719401c8c5f61921ee41"
	I0917 00:20:41.761838  853162 cri.go:89] found id: "96d46a46d90937e1dc254cbb641e1f12887151faabbe128f2cc51a8a833fe573"
	I0917 00:20:41.761846  853162 cri.go:89] found id: "90b187ed887fae063d0e3d6e7f9316abbc50f1e7b9c092596b43a1c43c86e79d"
	I0917 00:20:41.761850  853162 cri.go:89] found id: ""
	I0917 00:20:41.761898  853162 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0917 00:20:41.786684  853162 cri.go:116] JSON = [{"ociVersion":"1.2.0","id":"6fdef4164f99d27df21ab6d287cff2c0f203d8a05664455c8f947bb4cd8426c9","pid":1059,"status":"created","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6fdef4164f99d27df21ab6d287cff2c0f203d8a05664455c8f947bb4cd8426c9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6fdef4164f99d27df21ab6d287cff2c0f203d8a05664455c8f947bb4cd8426c9/rootfs","created":"2025-09-17T00:20:41.755974156Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"6fdef4164f99d27df21ab6d287cff2c0f203d8a05664455c8f947bb4cd8426c9","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-ha-472903_b57b4e111181f4c4157cb1fbf888e56c","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-ha-472903","io.kubernete
s.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"b57b4e111181f4c4157cb1fbf888e56c"},"owner":"root"},{"ociVersion":"1.2.0","id":"bcfa57226a9577e56259b5c1334edc792fe612c5f16df69faaae8a23e78e3d42","pid":1034,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bcfa57226a9577e56259b5c1334edc792fe612c5f16df69faaae8a23e78e3d42","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bcfa57226a9577e56259b5c1334edc792fe612c5f16df69faaae8a23e78e3d42/rootfs","created":"2025-09-17T00:20:41.732625278Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"bcfa57226a9577e56259b5c1334edc792fe612c5f16df69faaae8a23e78e3d42","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-ha-472903_2227061675da4ed34922d350d8862f72","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.
sandbox-name":"etcd-ha-472903","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"2227061675da4ed34922d350d8862f72"},"owner":"root"},{"ociVersion":"1.2.0","id":"e3f12221b67dba9fbc3b6d224220f55644b041f5627781f262665ff7739202c5","pid":1069,"status":"created","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e3f12221b67dba9fbc3b6d224220f55644b041f5627781f262665ff7739202c5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e3f12221b67dba9fbc3b6d224220f55644b041f5627781f262665ff7739202c5/rootfs","created":"2025-09-17T00:20:41.758018269Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"e3f12221b67dba9fbc3b6d224220f55644b041f5627781f262665ff7739202c5","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-ha-472903_485e4849d897bc38a8d0e2cce5ff1
09b","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-ha-472903","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"485e4849d897bc38a8d0e2cce5ff109b"},"owner":"root"},{"ociVersion":"1.2.0","id":"f5b6b96a38f0f8c3f34ca3c58bfdb904e4f2e4b6e569d9f7eb15a4f80fa3f460","pid":1068,"status":"created","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f5b6b96a38f0f8c3f34ca3c58bfdb904e4f2e4b6e569d9f7eb15a4f80fa3f460","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f5b6b96a38f0f8c3f34ca3c58bfdb904e4f2e4b6e569d9f7eb15a4f80fa3f460/rootfs","created":"2025-09-17T00:20:41.758888329Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"f5b6b96a38f0f8c3f34ca3c58bfdb904e4f2e4b6e569d9f7eb15a4f80fa3f460","io.kubernetes.cri.sandbox-log-directory":"/var/log/
pods/kube-system_kube-vip-ha-472903_3d9fc459b47fceff3d235003420b1a14","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-vip-ha-472903","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"3d9fc459b47fceff3d235003420b1a14"},"owner":"root"},{"ociVersion":"1.2.0","id":"fb2e9453a484b4c1536899e257ec541863f125a6f8ab0a620307af36a275f1f1","pid":1044,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fb2e9453a484b4c1536899e257ec541863f125a6f8ab0a620307af36a275f1f1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fb2e9453a484b4c1536899e257ec541863f125a6f8ab0a620307af36a275f1f1/rootfs","created":"2025-09-17T00:20:41.734135829Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"fb2e9453a484b4c1536899e257ec541863f125a6f8ab0a620307af36a275f1f1",
"io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-ha-472903_837a6c7c1a3b42ddee2d42c480d95c76","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-ha-472903","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"837a6c7c1a3b42ddee2d42c480d95c76"},"owner":"root"}]
	I0917 00:20:41.786840  853162 cri.go:126] list returned 5 containers
	I0917 00:20:41.786852  853162 cri.go:129] container: {ID:6fdef4164f99d27df21ab6d287cff2c0f203d8a05664455c8f947bb4cd8426c9 Status:created}
	I0917 00:20:41.786930  853162 cri.go:131] skipping 6fdef4164f99d27df21ab6d287cff2c0f203d8a05664455c8f947bb4cd8426c9 - not in ps
	I0917 00:20:41.786946  853162 cri.go:129] container: {ID:bcfa57226a9577e56259b5c1334edc792fe612c5f16df69faaae8a23e78e3d42 Status:running}
	I0917 00:20:41.786959  853162 cri.go:131] skipping bcfa57226a9577e56259b5c1334edc792fe612c5f16df69faaae8a23e78e3d42 - not in ps
	I0917 00:20:41.786967  853162 cri.go:129] container: {ID:e3f12221b67dba9fbc3b6d224220f55644b041f5627781f262665ff7739202c5 Status:created}
	I0917 00:20:41.786972  853162 cri.go:131] skipping e3f12221b67dba9fbc3b6d224220f55644b041f5627781f262665ff7739202c5 - not in ps
	I0917 00:20:41.786977  853162 cri.go:129] container: {ID:f5b6b96a38f0f8c3f34ca3c58bfdb904e4f2e4b6e569d9f7eb15a4f80fa3f460 Status:created}
	I0917 00:20:41.786995  853162 cri.go:131] skipping f5b6b96a38f0f8c3f34ca3c58bfdb904e4f2e4b6e569d9f7eb15a4f80fa3f460 - not in ps
	I0917 00:20:41.786999  853162 cri.go:129] container: {ID:fb2e9453a484b4c1536899e257ec541863f125a6f8ab0a620307af36a275f1f1 Status:running}
	I0917 00:20:41.787003  853162 cri.go:131] skipping fb2e9453a484b4c1536899e257ec541863f125a6f8ab0a620307af36a275f1f1 - not in ps
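	The JSON consumed above comes straight from runc's state directory for containerd's k8s.io namespace; all five entries are pod sandboxes, and since none of their IDs match the container IDs returned by crictl a few lines earlier, each one is "skipping … - not in ps" and nothing needs pausing. The same view can be had interactively with jq (sketch, assuming jq is available on the host):

	  # summarize what runc reports: short id, status, and the sandbox-name annotation
	  sudo runc --root /run/containerd/runc/k8s.io list -f json \
	    | jq -r '.[] | [.id[0:12], .status, .annotations["io.kubernetes.cri.sandbox-name"]] | @tsv'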
	I0917 00:20:41.787065  853162 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 00:20:41.800337  853162 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0917 00:20:41.800363  853162 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0917 00:20:41.800467  853162 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0917 00:20:41.812736  853162 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:20:41.813158  853162 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-472903" does not appear in /home/jenkins/minikube-integration/21550-749120/kubeconfig
	I0917 00:20:41.813314  853162 kubeconfig.go:62] /home/jenkins/minikube-integration/21550-749120/kubeconfig needs updating (will repair): [kubeconfig missing "ha-472903" cluster setting kubeconfig missing "ha-472903" context setting]
	I0917 00:20:41.813672  853162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/kubeconfig: {Name:mk937123a8fee18625833b0bd778c4556f6787be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:20:41.814257  853162 kapi.go:59] client config for ha-472903: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key", CAFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
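	Before this repair the kubeconfig at /home/jenkins/minikube-integration/21550-749120/kubeconfig had neither a "ha-472903" cluster nor a matching context, so both are written back and a REST client is built directly against https://192.168.49.2:8443 using the profile's client.crt/client.key and the minikube CA. The repaired file can be inspected afterwards (sketch):

	  # the cluster and context entries for ha-472903 should now be present
	  kubectl --kubeconfig /home/jenkins/minikube-integration/21550-749120/kubeconfig config get-contexts
	  # and the cluster's server URL should point at the primary control plane
	  kubectl --kubeconfig /home/jenkins/minikube-integration/21550-749120/kubeconfig config view --minify -o jsonpath='{.clusters[0].cluster.server}'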
	I0917 00:20:41.814963  853162 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0917 00:20:41.814974  853162 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0917 00:20:41.814988  853162 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0917 00:20:41.814996  853162 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0917 00:20:41.815001  853162 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0917 00:20:41.815006  853162 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0917 00:20:41.815574  853162 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0917 00:20:41.827605  853162 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.49.2
	I0917 00:20:41.827639  853162 kubeadm.go:593] duration metric: took 27.268873ms to restartPrimaryControlPlane
	I0917 00:20:41.827648  853162 kubeadm.go:394] duration metric: took 117.678147ms to StartCluster
	I0917 00:20:41.827665  853162 settings.go:142] acquiring lock: {Name:mk6c1a5bee23e141aad5180323c16c47ed580ab8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:20:41.827744  853162 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21550-749120/kubeconfig
	I0917 00:20:41.828336  853162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/kubeconfig: {Name:mk937123a8fee18625833b0bd778c4556f6787be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:20:41.828600  853162 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0917 00:20:41.828622  853162 start.go:241] waiting for startup goroutines ...
	I0917 00:20:41.828631  853162 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 00:20:41.828875  853162 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:20:41.831318  853162 out.go:179] * Enabled addons: 
	I0917 00:20:41.832539  853162 addons.go:514] duration metric: took 3.901981ms for enable addons: enabled=[]
	I0917 00:20:41.832574  853162 start.go:246] waiting for cluster config update ...
	I0917 00:20:41.832585  853162 start.go:255] writing updated cluster config ...
	I0917 00:20:41.834551  853162 out.go:203] 
	I0917 00:20:41.835727  853162 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:20:41.835814  853162 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0917 00:20:41.838024  853162 out.go:179] * Starting "ha-472903-m02" control-plane node in "ha-472903" cluster
	I0917 00:20:41.839194  853162 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0917 00:20:41.841484  853162 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:20:41.842456  853162 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0917 00:20:41.842487  853162 cache.go:58] Caching tarball of preloaded images
	I0917 00:20:41.842550  853162 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:20:41.842589  853162 preload.go:172] Found /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 00:20:41.842599  853162 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0917 00:20:41.842736  853162 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0917 00:20:41.865763  853162 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:20:41.865788  853162 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:20:41.865814  853162 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:20:41.865851  853162 start.go:360] acquireMachinesLock for ha-472903-m02: {Name:mk81d8c73856cf84ceff1767a1681f3f3cdab773 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:20:41.865946  853162 start.go:364] duration metric: took 47.876µs to acquireMachinesLock for "ha-472903-m02"
	I0917 00:20:41.865986  853162 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:20:41.866000  853162 fix.go:54] fixHost starting: m02
	I0917 00:20:41.866330  853162 cli_runner.go:164] Run: docker container inspect ha-472903-m02 --format={{.State.Status}}
	I0917 00:20:41.891138  853162 fix.go:112] recreateIfNeeded on ha-472903-m02: state=Stopped err=<nil>
	W0917 00:20:41.891172  853162 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:20:41.892758  853162 out.go:252] * Restarting existing docker container for "ha-472903-m02" ...
	I0917 00:20:41.892836  853162 cli_runner.go:164] Run: docker start ha-472903-m02
	I0917 00:20:42.162399  853162 cli_runner.go:164] Run: docker container inspect ha-472903-m02 --format={{.State.Status}}
	I0917 00:20:42.182746  853162 kic.go:430] container "ha-472903-m02" state is running.
	I0917 00:20:42.183063  853162 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m02
	I0917 00:20:42.202634  853162 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0917 00:20:42.202870  853162 machine.go:93] provisionDockerMachine start ...
	I0917 00:20:42.202934  853162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0917 00:20:42.222574  853162 main.go:141] libmachine: Using SSH client type: native
	I0917 00:20:42.222817  853162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33604 <nil> <nil>}
	I0917 00:20:42.222833  853162 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:20:42.223554  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57840->127.0.0.1:33604: read: connection reset by peer
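	The "connection reset by peer" here is expected immediately after `docker start`: sshd inside the freshly started ha-472903-m02 container is not accepting connections yet, and libmachine keeps retrying until the hostname command succeeds about three seconds later. The same wait can be expressed as a small retry loop (sketch, reusing the forwarded port and key shown in the log):

	  # wait until sshd in ha-472903-m02 answers on the forwarded port 33604
	  until ssh -o StrictHostKeyChecking=no -o ConnectTimeout=2 -p 33604 \
	        -i /home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa \
	        docker@127.0.0.1 hostname; do
	    sleep 1
	  done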
	I0917 00:20:45.357833  853162 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-472903-m02
	
	I0917 00:20:45.357866  853162 ubuntu.go:182] provisioning hostname "ha-472903-m02"
	I0917 00:20:45.357933  853162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0917 00:20:45.375079  853162 main.go:141] libmachine: Using SSH client type: native
	I0917 00:20:45.375292  853162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33604 <nil> <nil>}
	I0917 00:20:45.375306  853162 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-472903-m02 && echo "ha-472903-m02" | sudo tee /etc/hostname
	I0917 00:20:45.521107  853162 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-472903-m02
	
	I0917 00:20:45.521182  853162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0917 00:20:45.538481  853162 main.go:141] libmachine: Using SSH client type: native
	I0917 00:20:45.538686  853162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33604 <nil> <nil>}
	I0917 00:20:45.538702  853162 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-472903-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-472903-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-472903-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 00:20:45.686858  853162 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:20:45.686886  853162 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-749120/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-749120/.minikube}
	I0917 00:20:45.686908  853162 ubuntu.go:190] setting up certificates
	I0917 00:20:45.686921  853162 provision.go:84] configureAuth start
	I0917 00:20:45.686989  853162 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m02
	I0917 00:20:45.709916  853162 provision.go:143] copyHostCerts
	I0917 00:20:45.709958  853162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem
	I0917 00:20:45.710001  853162 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem, removing ...
	I0917 00:20:45.710013  853162 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem
	I0917 00:20:45.710094  853162 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem (1078 bytes)
	I0917 00:20:45.710217  853162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem
	I0917 00:20:45.710252  853162 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem, removing ...
	I0917 00:20:45.710259  853162 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem
	I0917 00:20:45.710316  853162 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem (1123 bytes)
	I0917 00:20:45.710546  853162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem
	I0917 00:20:45.710580  853162 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem, removing ...
	I0917 00:20:45.710587  853162 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem
	I0917 00:20:45.710634  853162 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem (1675 bytes)
	I0917 00:20:45.710760  853162 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem org=jenkins.ha-472903-m02 san=[127.0.0.1 192.168.49.3 ha-472903-m02 localhost minikube]
	I0917 00:20:45.966214  853162 provision.go:177] copyRemoteCerts
	I0917 00:20:45.966288  853162 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:20:45.966340  853162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0917 00:20:45.993644  853162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33604 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa Username:docker}
	I0917 00:20:46.100720  853162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 00:20:46.100794  853162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 00:20:46.126316  853162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 00:20:46.126385  853162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 00:20:46.153622  853162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 00:20:46.153689  853162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 00:20:46.186741  853162 provision.go:87] duration metric: took 499.804986ms to configureAuth
	I0917 00:20:46.186774  853162 ubuntu.go:206] setting minikube options for container-runtime
	I0917 00:20:46.187065  853162 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:20:46.187081  853162 machine.go:96] duration metric: took 3.984198071s to provisionDockerMachine
	I0917 00:20:46.187091  853162 start.go:293] postStartSetup for "ha-472903-m02" (driver="docker")
	I0917 00:20:46.187103  853162 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 00:20:46.187161  853162 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 00:20:46.187217  853162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0917 00:20:46.206089  853162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33604 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa Username:docker}
	I0917 00:20:46.307427  853162 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 00:20:46.311177  853162 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 00:20:46.311217  853162 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 00:20:46.311234  853162 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 00:20:46.311243  853162 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 00:20:46.311260  853162 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-749120/.minikube/addons for local assets ...
	I0917 00:20:46.311321  853162 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-749120/.minikube/files for local assets ...
	I0917 00:20:46.311449  853162 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> 7527072.pem in /etc/ssl/certs
	I0917 00:20:46.311463  853162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> /etc/ssl/certs/7527072.pem
	I0917 00:20:46.311587  853162 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 00:20:46.320721  853162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem --> /etc/ssl/certs/7527072.pem (1708 bytes)
	I0917 00:20:46.344942  853162 start.go:296] duration metric: took 157.832568ms for postStartSetup
	I0917 00:20:46.345031  853162 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:20:46.345086  853162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0917 00:20:46.363814  853162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33604 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa Username:docker}
	I0917 00:20:46.481346  853162 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:20:46.496931  853162 fix.go:56] duration metric: took 4.630916775s for fixHost
	I0917 00:20:46.496962  853162 start.go:83] releasing machines lock for "ha-472903-m02", held for 4.630985515s
	I0917 00:20:46.497035  853162 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m02
	I0917 00:20:46.537374  853162 out.go:179] * Found network options:
	I0917 00:20:46.538638  853162 out.go:179]   - NO_PROXY=192.168.49.2
	W0917 00:20:46.539662  853162 proxy.go:120] fail to check proxy env: Error ip not in block
	W0917 00:20:46.539717  853162 proxy.go:120] fail to check proxy env: Error ip not in block
	I0917 00:20:46.539804  853162 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 00:20:46.539839  853162 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 00:20:46.539854  853162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0917 00:20:46.539932  853162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m02
	I0917 00:20:46.572153  853162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33604 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa Username:docker}
	I0917 00:20:46.578652  853162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33604 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903-m02/id_rsa Username:docker}
	I0917 00:20:46.683527  853162 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0917 00:20:46.811849  853162 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0917 00:20:46.811930  853162 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:20:46.822453  853162 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0917 00:20:46.822480  853162 start.go:495] detecting cgroup driver to use...
	I0917 00:20:46.822516  853162 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:20:46.822567  853162 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 00:20:46.838817  853162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 00:20:46.851233  853162 docker.go:218] disabling cri-docker service (if available) ...
	I0917 00:20:46.851313  853162 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 00:20:46.866239  853162 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 00:20:46.878018  853162 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 00:20:46.975963  853162 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 00:20:47.072966  853162 docker.go:234] disabling docker service ...
	I0917 00:20:47.073043  853162 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 00:20:47.086710  853162 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 00:20:47.097597  853162 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 00:20:47.193710  853162 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 00:20:47.309500  853162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 00:20:47.324077  853162 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:20:47.342485  853162 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0917 00:20:47.353031  853162 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 00:20:47.365974  853162 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0917 00:20:47.366037  853162 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0917 00:20:47.377743  853162 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:20:47.390203  853162 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 00:20:47.403773  853162 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:20:47.415555  853162 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 00:20:47.427011  853162 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 00:20:47.439059  853162 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 00:20:47.457755  853162 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 00:20:47.476067  853162 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 00:20:47.485475  853162 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 00:20:47.493856  853162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:20:47.640430  853162 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0917 00:20:47.965745  853162 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0917 00:20:47.965818  853162 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0917 00:20:47.971657  853162 start.go:563] Will wait 60s for crictl version
	I0917 00:20:47.971722  853162 ssh_runner.go:195] Run: which crictl
	I0917 00:20:47.976692  853162 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:20:48.022291  853162 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0917 00:20:48.022364  853162 ssh_runner.go:195] Run: containerd --version
	I0917 00:20:48.051041  853162 ssh_runner.go:195] Run: containerd --version
	I0917 00:20:48.079146  853162 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0917 00:20:48.080252  853162 out.go:179]   - env NO_PROXY=192.168.49.2
	I0917 00:20:48.081331  853162 cli_runner.go:164] Run: docker network inspect ha-472903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:20:48.097998  853162 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0917 00:20:48.102017  853162 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:20:48.114518  853162 mustload.go:65] Loading cluster: ha-472903
	I0917 00:20:48.114732  853162 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:20:48.115010  853162 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0917 00:20:48.131516  853162 host.go:66] Checking if "ha-472903" exists ...
	I0917 00:20:48.131726  853162 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903 for IP: 192.168.49.3
	I0917 00:20:48.131736  853162 certs.go:194] generating shared ca certs ...
	I0917 00:20:48.131750  853162 certs.go:226] acquiring lock for ca certs: {Name:mk87d179b4a631193bd9c86db8034ccf19400cde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:20:48.131851  853162 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key
	I0917 00:20:48.131885  853162 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key
	I0917 00:20:48.131894  853162 certs.go:256] generating profile certs ...
	I0917 00:20:48.131975  853162 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key
	I0917 00:20:48.132015  853162 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key.d67fba4a
	I0917 00:20:48.132061  853162 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key
	I0917 00:20:48.132076  853162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 00:20:48.132094  853162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 00:20:48.132107  853162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 00:20:48.132119  853162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 00:20:48.132133  853162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 00:20:48.132146  853162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 00:20:48.132158  853162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 00:20:48.132170  853162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 00:20:48.132219  853162 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem (1338 bytes)
	W0917 00:20:48.132247  853162 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707_empty.pem, impossibly tiny 0 bytes
	I0917 00:20:48.132256  853162 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 00:20:48.132276  853162 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem (1078 bytes)
	I0917 00:20:48.132298  853162 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem (1123 bytes)
	I0917 00:20:48.132320  853162 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem (1675 bytes)
	I0917 00:20:48.132366  853162 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem (1708 bytes)
	I0917 00:20:48.132391  853162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> /usr/share/ca-certificates/7527072.pem
	I0917 00:20:48.132426  853162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:20:48.132448  853162 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem -> /usr/share/ca-certificates/752707.pem
	I0917 00:20:48.132506  853162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903
	I0917 00:20:48.148623  853162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33599 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/ha-472903/id_rsa Username:docker}
	I0917 00:20:48.235670  853162 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0917 00:20:48.239525  853162 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0917 00:20:48.251602  853162 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0917 00:20:48.254862  853162 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0917 00:20:48.266491  853162 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0917 00:20:48.269542  853162 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0917 00:20:48.281769  853162 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0917 00:20:48.284982  853162 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0917 00:20:48.296565  853162 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0917 00:20:48.300092  853162 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0917 00:20:48.317833  853162 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0917 00:20:48.321346  853162 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0917 00:20:48.335793  853162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 00:20:48.363041  853162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 00:20:48.387573  853162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 00:20:48.411906  853162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0917 00:20:48.435720  853162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0917 00:20:48.458754  853162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 00:20:48.481880  853162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 00:20:48.504634  853162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 00:20:48.528137  853162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem --> /usr/share/ca-certificates/7527072.pem (1708 bytes)
	I0917 00:20:48.550768  853162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 00:20:48.573488  853162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem --> /usr/share/ca-certificates/752707.pem (1338 bytes)
	I0917 00:20:48.596469  853162 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0917 00:20:48.614086  853162 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0917 00:20:48.630956  853162 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0917 00:20:48.648596  853162 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0917 00:20:48.667549  853162 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0917 00:20:48.687524  853162 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0917 00:20:48.709133  853162 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0917 00:20:48.732722  853162 ssh_runner.go:195] Run: openssl version
	I0917 00:20:48.739812  853162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7527072.pem && ln -fs /usr/share/ca-certificates/7527072.pem /etc/ssl/certs/7527072.pem"
	I0917 00:20:48.752002  853162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7527072.pem
	I0917 00:20:48.755921  853162 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 23:54 /usr/share/ca-certificates/7527072.pem
	I0917 00:20:48.755972  853162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7527072.pem
	I0917 00:20:48.764121  853162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7527072.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 00:20:48.775444  853162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 00:20:48.786765  853162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:20:48.791603  853162 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:20:48.791655  853162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:20:48.800012  853162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 00:20:48.810322  853162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/752707.pem && ln -fs /usr/share/ca-certificates/752707.pem /etc/ssl/certs/752707.pem"
	I0917 00:20:48.820434  853162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/752707.pem
	I0917 00:20:48.823957  853162 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 23:54 /usr/share/ca-certificates/752707.pem
	I0917 00:20:48.824004  853162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/752707.pem
	I0917 00:20:48.830646  853162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/752707.pem /etc/ssl/certs/51391683.0"
	I0917 00:20:48.839222  853162 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 00:20:48.842563  853162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 00:20:48.849046  853162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 00:20:48.855529  853162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 00:20:48.861801  853162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 00:20:48.868136  853162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 00:20:48.874594  853162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0917 00:20:48.880788  853162 kubeadm.go:926] updating node {m02 192.168.49.3 8443 v1.34.0 containerd true true} ...
	I0917 00:20:48.880874  853162 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-472903-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-472903 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 00:20:48.880898  853162 kube-vip.go:115] generating kube-vip config ...
	I0917 00:20:48.880935  853162 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0917 00:20:48.893271  853162 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:20:48.893323  853162 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0917 00:20:48.893358  853162 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 00:20:48.901765  853162 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 00:20:48.901815  853162 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0917 00:20:48.910619  853162 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0917 00:20:48.929104  853162 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 00:20:48.946566  853162 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0917 00:20:48.963876  853162 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0917 00:20:48.967541  853162 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:20:48.978442  853162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:20:49.084866  853162 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:20:49.097751  853162 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0917 00:20:49.098032  853162 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:20:49.099974  853162 out.go:179] * Verifying Kubernetes components...
	I0917 00:20:49.101123  853162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:20:49.204289  853162 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:20:49.220147  853162 kapi.go:59] client config for ha-472903: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/client.key", CAFile:"/home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0917 00:20:49.220241  853162 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0917 00:20:49.220551  853162 node_ready.go:35] waiting up to 6m0s for node "ha-472903-m02" to be "Ready" ...
	I0917 00:20:49.229156  853162 node_ready.go:49] node "ha-472903-m02" is "Ready"
	I0917 00:20:49.229189  853162 node_ready.go:38] duration metric: took 8.616304ms for node "ha-472903-m02" to be "Ready" ...
	I0917 00:20:49.229204  853162 api_server.go:52] waiting for apiserver process to appear ...
	I0917 00:20:49.229258  853162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:20:49.240716  853162 api_server.go:72] duration metric: took 142.915997ms to wait for apiserver process to appear ...
	I0917 00:20:49.240740  853162 api_server.go:88] waiting for apiserver healthz status ...
	I0917 00:20:49.240759  853162 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0917 00:20:49.245513  853162 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0917 00:20:49.246360  853162 api_server.go:141] control plane version: v1.34.0
	I0917 00:20:49.246384  853162 api_server.go:131] duration metric: took 5.636359ms to wait for apiserver health ...
	I0917 00:20:49.246392  853162 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 00:20:49.252346  853162 system_pods.go:59] 24 kube-system pods found
	I0917 00:20:49.252376  853162 system_pods.go:61] "coredns-66bc5c9577-c94hz" [774f1c0f-9759-44c2-957d-5a97670f951b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:20:49.252387  853162 system_pods.go:61] "coredns-66bc5c9577-qn8m7" [1c58205e-e865-42fc-8282-23e3d779ee97] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:20:49.252399  853162 system_pods.go:61] "etcd-ha-472903" [e333577b-838c-41c5-ba86-ce3d7de57077] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 00:20:49.252408  853162 system_pods.go:61] "etcd-ha-472903-m02" [8a478117-c53d-4621-aa09-be3c16d386c0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 00:20:49.252425  853162 system_pods.go:61] "etcd-ha-472903-m03" [73e10c6a-306a-4c7e-b816-1b8d6b815292] Running
	I0917 00:20:49.252435  853162 system_pods.go:61] "kindnet-lh7dv" [1da43ca7-9af7-4573-9cdc-fd21b098ca2c] Running
	I0917 00:20:49.252442  853162 system_pods.go:61] "kindnet-q7c7s" [85db5b30-8ace-4bb0-8886-32b9ca032b2b] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0917 00:20:49.252451  853162 system_pods.go:61] "kindnet-x6twd" [f2346479-1adb-4bc7-af07-971525be2b05] Running
	I0917 00:20:49.252456  853162 system_pods.go:61] "kube-apiserver-ha-472903" [e2844751-3962-4753-8b63-79c124dd5fd7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:20:49.252462  853162 system_pods.go:61] "kube-apiserver-ha-472903-m02" [6675419c-7693-4970-b73c-8415bcda1684] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:20:49.252466  853162 system_pods.go:61] "kube-apiserver-ha-472903-m03" [8c79e747-e193-4471-a4be-ab4d604998ad] Running
	I0917 00:20:49.252474  853162 system_pods.go:61] "kube-controller-manager-ha-472903" [be5cfd0b-a3b9-44cf-8cde-74e9eb89c738] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 00:20:49.252481  853162 system_pods.go:61] "kube-controller-manager-ha-472903-m02" [54f6e7e0-0a78-4651-b24f-f902c6bf7efb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 00:20:49.252485  853162 system_pods.go:61] "kube-controller-manager-ha-472903-m03" [d48b1c84-6653-43e8-9322-ae2c64471dde] Running
	I0917 00:20:49.252493  853162 system_pods.go:61] "kube-proxy-58lkb" [32fed88c-ce9e-4536-8e96-04ab5b4f5d42] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 00:20:49.252496  853162 system_pods.go:61] "kube-proxy-d4m8f" [d4a70eec-48a7-4ea6-871a-1b5ed2beca9a] Running
	I0917 00:20:49.252513  853162 system_pods.go:61] "kube-proxy-kn6nb" [53644856-9fda-4556-bbb5-12254c4b00a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 00:20:49.252520  853162 system_pods.go:61] "kube-scheduler-ha-472903" [e949de65-b218-45cb-abe7-79b704aae473] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 00:20:49.252526  853162 system_pods.go:61] "kube-scheduler-ha-472903-m02" [08b5a4f0-3aa6-4a82-b171-afc1eafcd4c2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 00:20:49.252533  853162 system_pods.go:61] "kube-scheduler-ha-472903-m03" [7b954c46-3b8c-47e7-b10c-a20dd936d45c] Running
	I0917 00:20:49.252537  853162 system_pods.go:61] "kube-vip-ha-472903" [d3849ebb-d365-491f-955c-2a7ca580290b] Running
	I0917 00:20:49.252540  853162 system_pods.go:61] "kube-vip-ha-472903-m02" [748f096f-bec6-4de8-92f0-128db827bdd6] Running
	I0917 00:20:49.252543  853162 system_pods.go:61] "kube-vip-ha-472903-m03" [62b1f237-95c3-4a40-b3b7-a519f7c80ad4] Running
	I0917 00:20:49.252547  853162 system_pods.go:61] "storage-provisioner" [ac7f283e-4d28-46cf-a519-bd227237d5e7] Running
	I0917 00:20:49.252551  853162 system_pods.go:74] duration metric: took 6.154731ms to wait for pod list to return data ...
	I0917 00:20:49.252558  853162 default_sa.go:34] waiting for default service account to be created ...
	I0917 00:20:49.254869  853162 default_sa.go:45] found service account: "default"
	I0917 00:20:49.254888  853162 default_sa.go:55] duration metric: took 2.323687ms for default service account to be created ...
	I0917 00:20:49.254897  853162 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 00:20:49.260719  853162 system_pods.go:86] 24 kube-system pods found
	I0917 00:20:49.260744  853162 system_pods.go:89] "coredns-66bc5c9577-c94hz" [774f1c0f-9759-44c2-957d-5a97670f951b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:20:49.260753  853162 system_pods.go:89] "coredns-66bc5c9577-qn8m7" [1c58205e-e865-42fc-8282-23e3d779ee97] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:20:49.260765  853162 system_pods.go:89] "etcd-ha-472903" [e333577b-838c-41c5-ba86-ce3d7de57077] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 00:20:49.260777  853162 system_pods.go:89] "etcd-ha-472903-m02" [8a478117-c53d-4621-aa09-be3c16d386c0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 00:20:49.260786  853162 system_pods.go:89] "etcd-ha-472903-m03" [73e10c6a-306a-4c7e-b816-1b8d6b815292] Running
	I0917 00:20:49.260793  853162 system_pods.go:89] "kindnet-lh7dv" [1da43ca7-9af7-4573-9cdc-fd21b098ca2c] Running
	I0917 00:20:49.260805  853162 system_pods.go:89] "kindnet-q7c7s" [85db5b30-8ace-4bb0-8886-32b9ca032b2b] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0917 00:20:49.260813  853162 system_pods.go:89] "kindnet-x6twd" [f2346479-1adb-4bc7-af07-971525be2b05] Running
	I0917 00:20:49.260819  853162 system_pods.go:89] "kube-apiserver-ha-472903" [e2844751-3962-4753-8b63-79c124dd5fd7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:20:49.260827  853162 system_pods.go:89] "kube-apiserver-ha-472903-m02" [6675419c-7693-4970-b73c-8415bcda1684] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:20:49.260832  853162 system_pods.go:89] "kube-apiserver-ha-472903-m03" [8c79e747-e193-4471-a4be-ab4d604998ad] Running
	I0917 00:20:49.260840  853162 system_pods.go:89] "kube-controller-manager-ha-472903" [be5cfd0b-a3b9-44cf-8cde-74e9eb89c738] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 00:20:49.260845  853162 system_pods.go:89] "kube-controller-manager-ha-472903-m02" [54f6e7e0-0a78-4651-b24f-f902c6bf7efb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 00:20:49.260853  853162 system_pods.go:89] "kube-controller-manager-ha-472903-m03" [d48b1c84-6653-43e8-9322-ae2c64471dde] Running
	I0917 00:20:49.260859  853162 system_pods.go:89] "kube-proxy-58lkb" [32fed88c-ce9e-4536-8e96-04ab5b4f5d42] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 00:20:49.260862  853162 system_pods.go:89] "kube-proxy-d4m8f" [d4a70eec-48a7-4ea6-871a-1b5ed2beca9a] Running
	I0917 00:20:49.260869  853162 system_pods.go:89] "kube-proxy-kn6nb" [53644856-9fda-4556-bbb5-12254c4b00a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 00:20:49.260880  853162 system_pods.go:89] "kube-scheduler-ha-472903" [e949de65-b218-45cb-abe7-79b704aae473] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 00:20:49.260893  853162 system_pods.go:89] "kube-scheduler-ha-472903-m02" [08b5a4f0-3aa6-4a82-b171-afc1eafcd4c2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 00:20:49.260903  853162 system_pods.go:89] "kube-scheduler-ha-472903-m03" [7b954c46-3b8c-47e7-b10c-a20dd936d45c] Running
	I0917 00:20:49.260912  853162 system_pods.go:89] "kube-vip-ha-472903" [d3849ebb-d365-491f-955c-2a7ca580290b] Running
	I0917 00:20:49.260918  853162 system_pods.go:89] "kube-vip-ha-472903-m02" [748f096f-bec6-4de8-92f0-128db827bdd6] Running
	I0917 00:20:49.260922  853162 system_pods.go:89] "kube-vip-ha-472903-m03" [62b1f237-95c3-4a40-b3b7-a519f7c80ad4] Running
	I0917 00:20:49.260925  853162 system_pods.go:89] "storage-provisioner" [ac7f283e-4d28-46cf-a519-bd227237d5e7] Running
	I0917 00:20:49.260932  853162 system_pods.go:126] duration metric: took 6.029773ms to wait for k8s-apps to be running ...
	I0917 00:20:49.260939  853162 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 00:20:49.260991  853162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:20:49.275832  853162 system_svc.go:56] duration metric: took 14.884993ms WaitForService to wait for kubelet
	I0917 00:20:49.275863  853162 kubeadm.go:578] duration metric: took 178.064338ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:20:49.275886  853162 node_conditions.go:102] verifying NodePressure condition ...
	I0917 00:20:49.278927  853162 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:20:49.278956  853162 node_conditions.go:123] node cpu capacity is 8
	I0917 00:20:49.278968  853162 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:20:49.278975  853162 node_conditions.go:123] node cpu capacity is 8
	I0917 00:20:49.278979  853162 node_conditions.go:105] duration metric: took 3.087442ms to run NodePressure ...
	I0917 00:20:49.278989  853162 start.go:241] waiting for startup goroutines ...
	I0917 00:20:49.279011  853162 start.go:255] writing updated cluster config ...
	I0917 00:20:49.280899  853162 out.go:203] 
	I0917 00:20:49.282298  853162 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:20:49.282399  853162 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0917 00:20:49.283991  853162 out.go:179] * Starting "ha-472903-m04" worker node in "ha-472903" cluster
	I0917 00:20:49.285326  853162 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0917 00:20:49.286460  853162 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:20:49.287455  853162 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0917 00:20:49.287476  853162 cache.go:58] Caching tarball of preloaded images
	I0917 00:20:49.287528  853162 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:20:49.287571  853162 preload.go:172] Found /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 00:20:49.287584  853162 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0917 00:20:49.287709  853162 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0917 00:20:49.307816  853162 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:20:49.307837  853162 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:20:49.307857  853162 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:20:49.307889  853162 start.go:360] acquireMachinesLock for ha-472903-m04: {Name:mkdbbd0d5b3cd7ad4b13d37f2d562d6d6421c5de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:20:49.307967  853162 start.go:364] duration metric: took 58.431µs to acquireMachinesLock for "ha-472903-m04"
	I0917 00:20:49.307989  853162 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:20:49.307997  853162 fix.go:54] fixHost starting: m04
	I0917 00:20:49.308294  853162 cli_runner.go:164] Run: docker container inspect ha-472903-m04 --format={{.State.Status}}
	I0917 00:20:49.327243  853162 fix.go:112] recreateIfNeeded on ha-472903-m04: state=Stopped err=<nil>
	W0917 00:20:49.327267  853162 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:20:49.328775  853162 out.go:252] * Restarting existing docker container for "ha-472903-m04" ...
	I0917 00:20:49.328840  853162 cli_runner.go:164] Run: docker start ha-472903-m04
	I0917 00:20:49.560781  853162 cli_runner.go:164] Run: docker container inspect ha-472903-m04 --format={{.State.Status}}
	I0917 00:20:49.579860  853162 kic.go:430] container "ha-472903-m04" state is running.
	I0917 00:20:49.580191  853162 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472903-m04
	I0917 00:20:49.599441  853162 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/ha-472903/config.json ...
	I0917 00:20:49.599717  853162 machine.go:93] provisionDockerMachine start ...
	I0917 00:20:49.599798  853162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	I0917 00:20:49.619815  853162 main.go:141] libmachine: Using SSH client type: native
	I0917 00:20:49.620099  853162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33609 <nil> <nil>}
	I0917 00:20:49.620117  853162 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:20:49.620757  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48712->127.0.0.1:33609: read: connection reset by peer
	I0917 00:20:52.657577  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:20:55.693322  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:20:58.729135  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:21:01.764765  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:21:04.801662  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:21:07.838629  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:21:10.874479  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	[... identical "Error dialing TCP: ssh: handshake failed" message repeated every ~3s through 00:23:48.752201 ...]
	I0917 00:23:51.753566  853162 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:23:51.753615  853162 ubuntu.go:182] provisioning hostname "ha-472903-m04"
	I0917 00:23:51.753683  853162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472903-m04
	I0917 00:23:51.772834  853162 main.go:141] libmachine: Using SSH client type: native
	I0917 00:23:51.773066  853162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33609 <nil> <nil>}
	I0917 00:23:51.773078  853162 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-472903-m04 && echo "ha-472903-m04" | sudo tee /etc/hostname
	I0917 00:23:51.808205  853162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	[... identical "Error dialing TCP: ssh: handshake failed" message repeated every ~3s through 00:26:29.680754 ...]
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f8214b09c2621       6e38f40d628db       4 minutes ago       Running             storage-provisioner       6                   1541c21735f2b       storage-provisioner
	51e9736cd26ba       409467f978b4a       5 minutes ago       Running             kindnet-cni               2                   b977672b0f413       kindnet-lh7dv
	95500c2423776       8c811b4aec35f       5 minutes ago       Running             busybox                   2                   14de620d7c5f7       busybox-7b57f96db7-6hrm6
	621e9365b6a1e       52546a367cc9e       5 minutes ago       Running             coredns                   2                   e72334b1d3866       coredns-66bc5c9577-qn8m7
	b934b3682af6c       52546a367cc9e       5 minutes ago       Running             coredns                   2                   a709eb7709714       coredns-66bc5c9577-c94hz
	1bbd0e4ad154d       6e38f40d628db       5 minutes ago       Exited              storage-provisioner       5                   1541c21735f2b       storage-provisioner
	940788d22241f       df0860106674d       5 minutes ago       Running             kube-proxy                2                   daeda75e08b16       kube-proxy-d4m8f
	20cc0ea62114e       765655ea60781       5 minutes ago       Running             kube-vip                  2                   f5b6b96a38f0f       kube-vip-ha-472903
	bb75d704c82d6       46169d968e920       5 minutes ago       Running             kube-scheduler            2                   6fdef4164f99d       kube-scheduler-ha-472903
	18287a3e85550       a0af72f2ec6d6       5 minutes ago       Running             kube-controller-manager   2                   e3f12221b67db       kube-controller-manager-ha-472903
	cfbd1f241e75b       90550c43ad2bc       5 minutes ago       Running             kube-apiserver            2                   fb2e9453a484b       kube-apiserver-ha-472903
	49d2da2c294ff       5f1f5298c888d       5 minutes ago       Running             etcd                      2                   bcfa57226a957       etcd-ha-472903
	b5592b8113e58       765655ea60781       6 minutes ago       Exited              kube-vip                  1                   1bc9d50f267a3       kube-vip-ha-472903
	2a56abb41f49d       409467f978b4a       12 minutes ago      Exited              kindnet-cni               1                   2c028f64de7ca       kindnet-lh7dv
	b4ccada04ba90       8c811b4aec35f       12 minutes ago      Exited              busybox                   1                   8196f32c07b91       busybox-7b57f96db7-6hrm6
	aeea8f1127caf       52546a367cc9e       12 minutes ago      Exited              coredns                   1                   91d98fd766ced       coredns-66bc5c9577-qn8m7
	9fc46931c7aae       52546a367cc9e       12 minutes ago      Exited              coredns                   1                   5e2ab87af7d54       coredns-66bc5c9577-c94hz
	b1c8344888d7d       df0860106674d       12 minutes ago      Exited              kube-proxy                1                   b64b7dfe57cfc       kube-proxy-d4m8f
	9685cc588651c       46169d968e920       13 minutes ago      Exited              kube-scheduler            1                   50f4cca94a4f8       kube-scheduler-ha-472903
	c3f8ee22fca28       a0af72f2ec6d6       13 minutes ago      Exited              kube-controller-manager   1                   811d527e0af1e       kube-controller-manager-ha-472903
	96d46a46d9093       90550c43ad2bc       13 minutes ago      Exited              kube-apiserver            1                   9fcac3d988698       kube-apiserver-ha-472903
	90b187ed887fa       5f1f5298c888d       13 minutes ago      Exited              etcd                      1                   070db27b7a5dd       etcd-ha-472903
	
	
	==> containerd <==
	Sep 17 00:20:47 ha-472903 containerd[476]: time="2025-09-17T00:20:47.889762267Z" level=info msg="StartContainer for \"621e9365b6a1e7932d0be0fd54e8182870b9e3b5410b628a202c202c4e9078c7\" returns successfully"
	Sep 17 00:20:47 ha-472903 containerd[476]: time="2025-09-17T00:20:47.943598322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-lh7dv,Uid:1da43ca7-9af7-4573-9cdc-fd21b098ca2c,Namespace:kube-system,Attempt:2,} returns sandbox id \"b977672b0f41328d201a4bb34f0beb9865da8a8ef61ebed0250ab553d215d554\""
	Sep 17 00:20:47 ha-472903 containerd[476]: time="2025-09-17T00:20:47.943621626Z" level=info msg="StartContainer for \"95500c2423776dae6e082dd09e971c84eb649d9eb57a5dcd43405eb7aa107e09\" returns successfully"
	Sep 17 00:20:47 ha-472903 containerd[476]: time="2025-09-17T00:20:47.949469190Z" level=info msg="CreateContainer within sandbox \"b977672b0f41328d201a4bb34f0beb9865da8a8ef61ebed0250ab553d215d554\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:2,}"
	Sep 17 00:20:47 ha-472903 containerd[476]: time="2025-09-17T00:20:47.963708049Z" level=info msg="CreateContainer within sandbox \"b977672b0f41328d201a4bb34f0beb9865da8a8ef61ebed0250ab553d215d554\" for &ContainerMetadata{Name:kindnet-cni,Attempt:2,} returns container id \"51e9736cd26baef27af8f66044dbbb2f6c24003a902483ddafd9799152c3c945\""
	Sep 17 00:20:47 ha-472903 containerd[476]: time="2025-09-17T00:20:47.964973066Z" level=info msg="StartContainer for \"51e9736cd26baef27af8f66044dbbb2f6c24003a902483ddafd9799152c3c945\""
	Sep 17 00:20:48 ha-472903 containerd[476]: time="2025-09-17T00:20:48.044933070Z" level=info msg="StartContainer for \"51e9736cd26baef27af8f66044dbbb2f6c24003a902483ddafd9799152c3c945\" returns successfully"
	Sep 17 00:21:17 ha-472903 containerd[476]: time="2025-09-17T00:21:17.850266871Z" level=info msg="received exit event container_id:\"1bbd0e4ad154dd22e23a6b6b466995ac2dbd1dd27ce2f2d3678d730222c98a17\"  id:\"1bbd0e4ad154dd22e23a6b6b466995ac2dbd1dd27ce2f2d3678d730222c98a17\"  pid:1806  exit_status:1  exited_at:{seconds:1758068477  nanos:849762129}"
	Sep 17 00:21:17 ha-472903 containerd[476]: time="2025-09-17T00:21:17.867989510Z" level=info msg="shim disconnected" id=1bbd0e4ad154dd22e23a6b6b466995ac2dbd1dd27ce2f2d3678d730222c98a17 namespace=k8s.io
	Sep 17 00:21:17 ha-472903 containerd[476]: time="2025-09-17T00:21:17.868021096Z" level=warning msg="cleaning up after shim disconnected" id=1bbd0e4ad154dd22e23a6b6b466995ac2dbd1dd27ce2f2d3678d730222c98a17 namespace=k8s.io
	Sep 17 00:21:17 ha-472903 containerd[476]: time="2025-09-17T00:21:17.868030418Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 17 00:21:18 ha-472903 containerd[476]: time="2025-09-17T00:21:18.398063856Z" level=info msg="RemoveContainer for \"c8a737e1be33c6e4b6e17f5359483d22d3eeb7ca2497546109c1097eb9343a7f\""
	Sep 17 00:21:18 ha-472903 containerd[476]: time="2025-09-17T00:21:18.402976884Z" level=info msg="RemoveContainer for \"c8a737e1be33c6e4b6e17f5359483d22d3eeb7ca2497546109c1097eb9343a7f\" returns successfully"
	Sep 17 00:21:32 ha-472903 containerd[476]: time="2025-09-17T00:21:32.233881716Z" level=info msg="CreateContainer within sandbox \"1541c21735f2b9bd3802a75a7db16bff5ee90231594cb6474da196c01a8d451e\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:6,}"
	Sep 17 00:21:32 ha-472903 containerd[476]: time="2025-09-17T00:21:32.245947172Z" level=info msg="CreateContainer within sandbox \"1541c21735f2b9bd3802a75a7db16bff5ee90231594cb6474da196c01a8d451e\" for &ContainerMetadata{Name:storage-provisioner,Attempt:6,} returns container id \"f8214b09c2621c0418d9c58f3d89fe22f061e14bb833044f218fbe84f42473eb\""
	Sep 17 00:21:32 ha-472903 containerd[476]: time="2025-09-17T00:21:32.246500838Z" level=info msg="StartContainer for \"f8214b09c2621c0418d9c58f3d89fe22f061e14bb833044f218fbe84f42473eb\""
	Sep 17 00:21:32 ha-472903 containerd[476]: time="2025-09-17T00:21:32.299139833Z" level=info msg="StartContainer for \"f8214b09c2621c0418d9c58f3d89fe22f061e14bb833044f218fbe84f42473eb\" returns successfully"
	Sep 17 00:21:41 ha-472903 containerd[476]: time="2025-09-17T00:21:41.348029797Z" level=info msg="StopPodSandbox for \"fe7a407d2eb97d648dbca1e85a5587efe15f437488c6dd3ef99c90d4b44796b2\""
	Sep 17 00:21:41 ha-472903 containerd[476]: time="2025-09-17T00:21:41.348172510Z" level=info msg="TearDown network for sandbox \"fe7a407d2eb97d648dbca1e85a5587efe15f437488c6dd3ef99c90d4b44796b2\" successfully"
	Sep 17 00:21:41 ha-472903 containerd[476]: time="2025-09-17T00:21:41.348186475Z" level=info msg="StopPodSandbox for \"fe7a407d2eb97d648dbca1e85a5587efe15f437488c6dd3ef99c90d4b44796b2\" returns successfully"
	Sep 17 00:21:41 ha-472903 containerd[476]: time="2025-09-17T00:21:41.348545473Z" level=info msg="RemovePodSandbox for \"fe7a407d2eb97d648dbca1e85a5587efe15f437488c6dd3ef99c90d4b44796b2\""
	Sep 17 00:21:41 ha-472903 containerd[476]: time="2025-09-17T00:21:41.348607622Z" level=info msg="Forcibly stopping sandbox \"fe7a407d2eb97d648dbca1e85a5587efe15f437488c6dd3ef99c90d4b44796b2\""
	Sep 17 00:21:41 ha-472903 containerd[476]: time="2025-09-17T00:21:41.348700807Z" level=info msg="TearDown network for sandbox \"fe7a407d2eb97d648dbca1e85a5587efe15f437488c6dd3ef99c90d4b44796b2\" successfully"
	Sep 17 00:21:41 ha-472903 containerd[476]: time="2025-09-17T00:21:41.353958210Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fe7a407d2eb97d648dbca1e85a5587efe15f437488c6dd3ef99c90d4b44796b2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Sep 17 00:21:41 ha-472903 containerd[476]: time="2025-09-17T00:21:41.354050291Z" level=info msg="RemovePodSandbox \"fe7a407d2eb97d648dbca1e85a5587efe15f437488c6dd3ef99c90d4b44796b2\" returns successfully"
	
	
	==> coredns [621e9365b6a1e7932d0be0fd54e8182870b9e3b5410b628a202c202c4e9078c7] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:57507 - 11253 "HINFO IN 7188832170103609277.3422517168116823213. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.014185616s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> coredns [9fc46931c7aae5fea2058b723439b03184beee352ff9a7efcf262818181a635d] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60683 - 9436 "HINFO IN 7751308179169184926.6829077423459472962. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.019258685s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [aeea8f1127caf7117ade119a9e492104789925a531209d0aba3022cd18cb7ce1] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40200 - 1569 "HINFO IN 6158707635578374570.8737516254824064952. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.057247461s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [b934b3682af6c15b3afdd0b4bb9d6aedbf1a383e01439af70ca742961fcf08e5] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54067 - 36488 "HINFO IN 3956355031104215105.4564410618860442453. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.017644245s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               ha-472903
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-472903
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=ha-472903
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_16T23_56_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Sep 2025 23:56:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-472903
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:26:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:20:46 +0000   Tue, 16 Sep 2025 23:56:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:20:46 +0000   Tue, 16 Sep 2025 23:56:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:20:46 +0000   Tue, 16 Sep 2025 23:56:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:20:46 +0000   Tue, 16 Sep 2025 23:56:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-472903
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 7cc22c1a44b0433eb39fd662150abe71
	  System UUID:                695af4c7-28fb-4299-9454-75db3262ca2c
	  Boot ID:                    4acfd7d3-9698-436f-b4ae-efdf6bd483d5
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-6hrm6             0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 coredns-66bc5c9577-c94hz             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     29m
	  kube-system                 coredns-66bc5c9577-qn8m7             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     29m
	  kube-system                 etcd-ha-472903                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         29m
	  kube-system                 kindnet-lh7dv                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      29m
	  kube-system                 kube-apiserver-ha-472903             250m (3%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-ha-472903    200m (2%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-d4m8f                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-ha-472903             100m (1%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-vip-ha-472903                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m39s                  kube-proxy       
	  Normal  Starting                 29m                    kube-proxy       
	  Normal  Starting                 12m                    kube-proxy       
	  Normal  NodeHasSufficientPID     29m (x7 over 29m)      kubelet          Node ha-472903 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    29m (x8 over 29m)      kubelet          Node ha-472903 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  29m (x8 over 29m)      kubelet          Node ha-472903 status is now: NodeHasSufficientMemory
	  Normal  Starting                 29m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     29m                    kubelet          Node ha-472903 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    29m                    kubelet          Node ha-472903 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 29m                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  29m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  29m                    kubelet          Node ha-472903 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           29m                    node-controller  Node ha-472903 event: Registered Node ha-472903 in Controller
	  Normal  RegisteredNode           29m                    node-controller  Node ha-472903 event: Registered Node ha-472903 in Controller
	  Normal  RegisteredNode           28m                    node-controller  Node ha-472903 event: Registered Node ha-472903 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-472903 event: Registered Node ha-472903 in Controller
	  Normal  NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 13m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)      kubelet          Node ha-472903 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)      kubelet          Node ha-472903 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)      kubelet          Node ha-472903 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                    node-controller  Node ha-472903 event: Registered Node ha-472903 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-472903 event: Registered Node ha-472903 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-472903 event: Registered Node ha-472903 in Controller
	  Normal  Starting                 5m50s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m50s (x8 over 5m50s)  kubelet          Node ha-472903 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m50s (x8 over 5m50s)  kubelet          Node ha-472903 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m50s (x7 over 5m50s)  kubelet          Node ha-472903 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m50s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m41s                  node-controller  Node ha-472903 event: Registered Node ha-472903 in Controller
	  Normal  RegisteredNode           5m41s                  node-controller  Node ha-472903 event: Registered Node ha-472903 in Controller
	
	
	Name:               ha-472903-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-472903-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=ha-472903
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_16T23_57_23_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Sep 2025 23:57:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-472903-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:26:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:25:22 +0000   Tue, 16 Sep 2025 23:57:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:25:22 +0000   Tue, 16 Sep 2025 23:57:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:25:22 +0000   Tue, 16 Sep 2025 23:57:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:25:22 +0000   Tue, 16 Sep 2025 23:57:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-472903-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 183afc89f8c0450aaa9b4a942a0cbf3c
	  System UUID:                85df9db8-f21a-4038-9f8c-4cc1d81dc0d5
	  Boot ID:                    4acfd7d3-9698-436f-b4ae-efdf6bd483d5
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-4jfjt                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 etcd-ha-472903-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         29m
	  kube-system                 kindnet-q7c7s                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      29m
	  kube-system                 kube-apiserver-ha-472903-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-ha-472903-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-58lkb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-ha-472903-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-vip-ha-472903-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m31s                  kube-proxy       
	  Normal  Starting                 29m                    kube-proxy       
	  Normal  RegisteredNode           29m                    node-controller  Node ha-472903-m02 event: Registered Node ha-472903-m02 in Controller
	  Normal  RegisteredNode           29m                    node-controller  Node ha-472903-m02 event: Registered Node ha-472903-m02 in Controller
	  Normal  RegisteredNode           28m                    node-controller  Node ha-472903-m02 event: Registered Node ha-472903-m02 in Controller
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node ha-472903-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)      kubelet          Node ha-472903-m02 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node ha-472903-m02 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 14m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           14m                    node-controller  Node ha-472903-m02 event: Registered Node ha-472903-m02 in Controller
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)      kubelet          Node ha-472903-m02 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)      kubelet          Node ha-472903-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)      kubelet          Node ha-472903-m02 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 13m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           12m                    node-controller  Node ha-472903-m02 event: Registered Node ha-472903-m02 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-472903-m02 event: Registered Node ha-472903-m02 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-472903-m02 event: Registered Node ha-472903-m02 in Controller
	  Normal  Starting                 5m48s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m48s (x8 over 5m48s)  kubelet          Node ha-472903-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m48s (x8 over 5m48s)  kubelet          Node ha-472903-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m48s (x7 over 5m48s)  kubelet          Node ha-472903-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m41s                  node-controller  Node ha-472903-m02 event: Registered Node ha-472903-m02 in Controller
	  Normal  RegisteredNode           5m41s                  node-controller  Node ha-472903-m02 event: Registered Node ha-472903-m02 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 86 e8 75 4b 01 57 08 06
	[  +0.025562] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff be 3d a9 85 b1 bd 08 06
	[ +13.150028] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff d6 5c f0 26 cd ba 08 06
	[  +0.000341] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a 20 90 fb f5 d8 08 06
	[ +28.639349] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 06 26 63 8d db 90 08 06
	[  +0.000417] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff be 3d a9 85 b1 bd 08 06
	[  +0.836892] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 56 cc 9b 52 38 94 08 06
	[  +0.080327] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3e 79 8e c8 7e 37 08 06
	[Sep16 23:40] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 66 3b 76 df aa 6a 08 06
	[ +20.325550] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff de 39 4b 41 df 63 08 06
	[  +0.000318] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 66 3b 76 df aa 6a 08 06
	[  +8.925776] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3e cd c1 f7 dc c8 08 06
	[  +0.000373] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 3e 79 8e c8 7e 37 08 06
	
	
	==> etcd [49d2da2c294ffce8aaab7950c7c7bac9da1f6e2bd6748a602f3c2da98747e64d] <==
	{"level":"warn","ts":"2025-09-17T00:20:45.897603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:20:45.907656Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:20:45.914905Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:20:45.922622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:20:45.929295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:20:45.936511Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:20:45.943875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:20:45.951610Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:20:45.959819Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:20:45.967801Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:20:45.975821Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:20:45.985341Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:20:45.992802Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:20:46.000469Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:20:46.008121Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:20:46.016590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:20:46.023121Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:20:46.034571Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:20:46.042041Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:20:46.049229Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:20:46.056185Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:20:46.068745Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:20:46.075500Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:20:46.082165Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:20:46.140050Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38932","server-name":"","error":"EOF"}
	
	
	==> etcd [90b187ed887fae063d0e3d6e7f9316abbc50f1e7b9c092596b43a1c43c86e79d] <==
	{"level":"info","ts":"2025-09-17T00:20:21.649346Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2025-09-17T00:20:21.649366Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"warn","ts":"2025-09-17T00:20:21.931516Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128040017930715452,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-09-17T00:20:21.975976Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"1.991108218s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"","error":"context canceled"}
	{"level":"info","ts":"2025-09-17T00:20:21.976137Z","caller":"traceutil/trace.go:172","msg":"trace[552472908] range","detail":"{range_begin:/registry/health; range_end:; }","duration":"1.991279809s","start":"2025-09-17T00:20:19.984841Z","end":"2025-09-17T00:20:21.976120Z","steps":["trace[552472908] 'agreement among raft nodes before linearized reading'  (duration: 1.991081425s)"],"step_count":1}
	{"level":"warn","ts":"2025-09-17T00:20:21.976193Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-17T00:20:19.984824Z","time spent":"1.991350349s","remote":"127.0.0.1:49808","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":0,"request content":"key:\"/registry/health\" "}
	2025/09/17 00:20:21 WARNING: [core] [Server #5]grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2025-09-17T00:20:22.423155Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"2.969960954s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"","error":"context canceled"}
	{"level":"info","ts":"2025-09-17T00:20:22.423232Z","caller":"traceutil/trace.go:172","msg":"trace[978228511] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; }","duration":"2.970056208s","start":"2025-09-17T00:20:19.453161Z","end":"2025-09-17T00:20:22.423217Z","steps":["trace[978228511] 'agreement among raft nodes before linearized reading'  (duration: 2.969959113s)"],"step_count":1}
	{"level":"warn","ts":"2025-09-17T00:20:22.423281Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-17T00:20:19.453146Z","time spent":"2.970119154s","remote":"127.0.0.1:57080","response type":"/etcdserverpb.KV/Range","request count":0,"request size":69,"response count":0,"response size":0,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 "}
	2025/09/17 00:20:22 WARNING: [core] [Server #5]grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2025-09-17T00:20:22.432236Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128040017930715452,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-09-17T00:20:22.932466Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128040017930715452,"retry-timeout":"500ms"}
	{"level":"info","ts":"2025-09-17T00:20:22.932622Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-17T00:20:22.932662Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"ha-472903","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-09-17T00:20:22.932747Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"warn","ts":"2025-09-17T00:20:22.947283Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"3.603126964s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/plndr-cp-lock\" limit:1 ","response":"","error":"context canceled"}
	{"level":"info","ts":"2025-09-17T00:20:22.947353Z","caller":"traceutil/trace.go:172","msg":"trace[1493172138] range","detail":"{range_begin:/registry/leases/kube-system/plndr-cp-lock; range_end:; }","duration":"3.603215213s","start":"2025-09-17T00:20:19.344124Z","end":"2025-09-17T00:20:22.947339Z","steps":["trace[1493172138] 'agreement among raft nodes before linearized reading'  (duration: 3.603125663s)"],"step_count":1}
	{"level":"warn","ts":"2025-09-17T00:20:22.947427Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-17T00:20:19.344108Z","time spent":"3.603292406s","remote":"127.0.0.1:57434","response type":"/etcdserverpb.KV/Range","request count":0,"request size":46,"response count":0,"response size":0,"request content":"key:\"/registry/leases/kube-system/plndr-cp-lock\" limit:1 "}
	2025/09/17 00:20:22 WARNING: [core] [Server #5]grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2025-09-17T00:20:22.947024Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"aec36adc501070cc is starting a new election at term 3"}
	{"level":"info","ts":"2025-09-17T00:20:22.948106Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"aec36adc501070cc became pre-candidate at term 3"}
	{"level":"info","ts":"2025-09-17T00:20:22.948137Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 3, index: 5744] sent MsgPreVote request to 3aa85cdcd5e5557b at term 3"}
	{"level":"info","ts":"2025-09-17T00:20:22.948181Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2025-09-17T00:20:22.948193Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	
	
	==> kernel <==
	 00:26:31 up  3:08,  0 users,  load average: 0.21, 0.45, 0.71
	Linux ha-472903 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [2a56abb41f49d6755de68bb41070eee7c07fee5950b2584042a3850228b3c274] <==
	I0917 00:19:37.390101       1 main.go:301] handling current node
	I0917 00:19:37.390118       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:19:37.390123       1 main.go:324] Node ha-472903-m02 has CIDR [10.244.1.0/24] 
	I0917 00:19:37.390327       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:19:37.390339       1 main.go:324] Node ha-472903-m03 has CIDR [10.244.2.0/24] 
	I0917 00:19:47.397482       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:19:47.397526       1 main.go:301] handling current node
	I0917 00:19:47.397543       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:19:47.397548       1 main.go:324] Node ha-472903-m02 has CIDR [10.244.1.0/24] 
	I0917 00:19:47.397996       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:19:47.398026       1 main.go:324] Node ha-472903-m03 has CIDR [10.244.2.0/24] 
	I0917 00:19:57.390658       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:19:57.390704       1 main.go:301] handling current node
	I0917 00:19:57.390723       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:19:57.390729       1 main.go:324] Node ha-472903-m02 has CIDR [10.244.1.0/24] 
	I0917 00:19:57.390896       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:19:57.391108       1 main.go:324] Node ha-472903-m03 has CIDR [10.244.2.0/24] 
	I0917 00:20:07.391508       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:20:07.391558       1 main.go:301] handling current node
	I0917 00:20:07.391577       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:20:07.391584       1 main.go:324] Node ha-472903-m02 has CIDR [10.244.1.0/24] 
	I0917 00:20:17.393522       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:20:17.393586       1 main.go:301] handling current node
	I0917 00:20:17.393611       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:20:17.393620       1 main.go:324] Node ha-472903-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [51e9736cd26baef27af8f66044dbbb2f6c24003a902483ddafd9799152c3c945] <==
	I0917 00:25:28.460580       1 main.go:324] Node ha-472903-m02 has CIDR [10.244.1.0/24] 
	I0917 00:25:38.456601       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:25:38.456633       1 main.go:301] handling current node
	I0917 00:25:38.456649       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:25:38.456654       1 main.go:324] Node ha-472903-m02 has CIDR [10.244.1.0/24] 
	I0917 00:25:48.463894       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:25:48.463937       1 main.go:301] handling current node
	I0917 00:25:48.463958       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:25:48.463966       1 main.go:324] Node ha-472903-m02 has CIDR [10.244.1.0/24] 
	I0917 00:25:58.460912       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:25:58.460949       1 main.go:324] Node ha-472903-m02 has CIDR [10.244.1.0/24] 
	I0917 00:25:58.461153       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:25:58.461167       1 main.go:301] handling current node
	I0917 00:26:08.464186       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:26:08.464229       1 main.go:301] handling current node
	I0917 00:26:08.464244       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:26:08.464249       1 main.go:324] Node ha-472903-m02 has CIDR [10.244.1.0/24] 
	I0917 00:26:18.464494       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:26:18.464528       1 main.go:301] handling current node
	I0917 00:26:18.464549       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:26:18.464553       1 main.go:324] Node ha-472903-m02 has CIDR [10.244.1.0/24] 
	I0917 00:26:28.460546       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:26:28.460585       1 main.go:301] handling current node
	I0917 00:26:28.460600       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:26:28.460605       1 main.go:324] Node ha-472903-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [96d46a46d90937e1dc254cbb641e1f12887151faabbe128f2cc51a8a833fe573] <==
	W0917 00:20:22.942097       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 00:20:22.942325       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0917 00:20:22.942394       1 watcher.go:335] watch chan error: etcdserver: no leader
	W0917 00:20:22.942710       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 00:20:22.942765       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 00:20:22.942837       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 00:20:22.942848       1 logging.go:55] [core] [Channel #87 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 00:20:22.942890       1 logging.go:55] [core] [Channel #21 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 00:20:22.942911       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 00:20:22.942786       1 logging.go:55] [core] [Channel #191 SubChannel #193]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0917 00:20:22.942935       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:20:22.943141       1 watcher.go:335] watch chan error: etcdserver: no leader
	W0917 00:20:22.943456       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 00:20:22.943461       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 00:20:22.943528       1 logging.go:55] [core] [Channel #83 SubChannel #85]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 00:20:22.943635       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 00:20:22.943837       1 logging.go:55] [core] [Channel #187 SubChannel #189]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 00:20:22.943901       1 logging.go:55] [core] [Channel #159 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 00:20:22.943915       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	{"level":"warn","ts":"2025-09-17T00:20:22.946894Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc001ea85a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Canceled desc = context canceled"}
	E0917 00:20:22.946997       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"context canceled\"}: context canceled" logger="UnhandledError"
	E0917 00:20:22.947072       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0917 00:20:22.948228       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0917 00:20:22.948272       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0917 00:20:22.949465       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="2.520117ms" method="GET" path="/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/plndr-cp-lock" result=null
	
	
	==> kube-apiserver [cfbd1f241e75bbdd3b0d77d222013c1b9c372f8ee1eb0e32ec73e15ba81bca03] <==
	I0917 00:20:46.719376       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0917 00:20:46.727533       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0917 00:20:46.728312       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	W0917 00:20:46.734199       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.3]
	I0917 00:20:46.736716       1 controller.go:667] quota admission added evaluator for: endpoints
	I0917 00:20:46.750980       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0917 00:20:46.753600       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0917 00:20:46.778596       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0917 00:20:46.798842       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0917 00:20:46.798883       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0917 00:20:46.799401       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0917 00:20:47.388338       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0917 00:20:47.607032       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0917 00:20:48.063383       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I0917 00:20:50.113785       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0917 00:20:50.576091       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0917 00:21:03.487946       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0917 00:21:46.901727       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:21:53.802133       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:23:16.849432       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:23:19.753090       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:24:30.533343       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:24:39.745069       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:25:39.751921       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:25:51.983092       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [18287a3e85550b7db213c1e85a9b9a9076fb5a3b9d9c298b023fb4bef293d502] <==
	I0917 00:20:50.150027       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	E0917 00:21:10.061476       1 gc_controller.go:151] "Failed to get node" err="node \"ha-472903-m03\" not found" logger="pod-garbage-collector-controller" node="ha-472903-m03"
	E0917 00:21:10.061507       1 gc_controller.go:151] "Failed to get node" err="node \"ha-472903-m03\" not found" logger="pod-garbage-collector-controller" node="ha-472903-m03"
	E0917 00:21:10.061515       1 gc_controller.go:151] "Failed to get node" err="node \"ha-472903-m03\" not found" logger="pod-garbage-collector-controller" node="ha-472903-m03"
	E0917 00:21:10.061519       1 gc_controller.go:151] "Failed to get node" err="node \"ha-472903-m03\" not found" logger="pod-garbage-collector-controller" node="ha-472903-m03"
	E0917 00:21:10.061524       1 gc_controller.go:151] "Failed to get node" err="node \"ha-472903-m03\" not found" logger="pod-garbage-collector-controller" node="ha-472903-m03"
	E0917 00:21:30.061672       1 gc_controller.go:151] "Failed to get node" err="node \"ha-472903-m03\" not found" logger="pod-garbage-collector-controller" node="ha-472903-m03"
	E0917 00:21:30.061708       1 gc_controller.go:151] "Failed to get node" err="node \"ha-472903-m03\" not found" logger="pod-garbage-collector-controller" node="ha-472903-m03"
	E0917 00:21:30.061716       1 gc_controller.go:151] "Failed to get node" err="node \"ha-472903-m03\" not found" logger="pod-garbage-collector-controller" node="ha-472903-m03"
	E0917 00:21:30.061723       1 gc_controller.go:151] "Failed to get node" err="node \"ha-472903-m03\" not found" logger="pod-garbage-collector-controller" node="ha-472903-m03"
	E0917 00:21:30.061730       1 gc_controller.go:151] "Failed to get node" err="node \"ha-472903-m03\" not found" logger="pod-garbage-collector-controller" node="ha-472903-m03"
	I0917 00:21:30.072563       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-472903-m03"
	I0917 00:21:30.093758       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-472903-m03"
	I0917 00:21:30.093794       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-472903-m03"
	E0917 00:21:30.096645       1 gc_controller.go:256] "Unhandled Error" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62b1f237-95c3-4a40-b3b7-a519f7c80ad4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"},{\\\"type\\\":\\\"DisruptionTarget\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-17T00:21:30Z\\\",\\\"message\\\":\\\"PodGC: node no longer exists\\\",\\\"observedGeneration\\\":2,\\\"reason\\\":\\\"DeletionByPodGC\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"DisruptionTarget\\\"}],\\\"observedGeneration\\\":2,\\\"phase\\\":\\\"Failed\\\"}}\" for pod \"kube-system\"/\"kube-vip-ha-472903-m03\": pods \"kube-vip-ha-472903-m03\" not found" logger="UnhandledError"
	I0917 00:21:30.096686       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-x6twd"
	E0917 00:21:30.118075       1 gc_controller.go:256] "Unhandled Error" err="pods \"kindnet-x6twd\" not found" logger="UnhandledError"
	I0917 00:21:30.118103       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-kn6nb"
	I0917 00:21:30.136545       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-kn6nb"
	I0917 00:21:30.136576       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-472903-m03"
	E0917 00:21:30.143817       1 gc_controller.go:256] "Unhandled Error" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73e10c6a-306a-4c7e-b816-1b8d6b815292\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"},{\\\"type\\\":\\\"DisruptionTarget\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-17T00:21:30Z\\\",\\\"message\\\":\\\"PodGC: node no longer exists\\\",\\\"observedGeneration\\\":2,\\\"reason\\\":\\\"DeletionByPodGC\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"DisruptionTarget\\\"}],\\\"observedGeneration\\\":2,\\\"phase\\\":\\\"Failed\\\"}}\" for pod \"kube-system\"/\"etcd-ha-472903-m03\": pods \"etcd-ha-472903-m03\" not found" logger="UnhandledError"
	I0917 00:21:30.143862       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-472903-m03"
	I0917 00:21:30.164614       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-472903-m03"
	I0917 00:21:30.164650       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-472903-m03"
	E0917 00:21:30.172689       1 gc_controller.go:256] "Unhandled Error" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b954c46-3b8c-47e7-b10c-a20dd936d45c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"},{\\\"type\\\":\\\"DisruptionTarget\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-17T00:21:30Z\\\",\\\"message\\\":\\\"PodGC: node no longer exists\\\",\\\"observedGeneration\\\":2,\\\"reason\\\":\\\"DeletionByPodGC\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"DisruptionTarget\\\"}],\\\"observedGeneration\\\":2,\\\"phase\\\":\\\"Failed\\\"}}\" for pod \"kube-system\"/\"kube-scheduler-ha-472903-m03\": pods \"kube-scheduler-ha-472903-m03\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [c3f8ee22fca28b303f553c3003d1000b80565b4147ba719401c8c5f61921ee41] <==
	I0917 00:13:38.431764       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0917 00:13:38.431826       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0917 00:13:38.431860       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0917 00:13:38.431926       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0917 00:13:38.431992       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0917 00:13:38.432765       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0917 00:13:38.432816       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0917 00:13:38.432831       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0917 00:13:38.432867       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0917 00:13:38.432870       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0917 00:13:38.433430       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0917 00:13:38.433549       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0917 00:13:38.433648       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-472903"
	I0917 00:13:38.433689       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-472903-m02"
	I0917 00:13:38.433719       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-472903-m03"
	I0917 00:13:38.433784       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0917 00:13:38.434607       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0917 00:13:38.436471       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0917 00:13:38.443120       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0917 00:13:38.447017       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	E0917 00:20:18.386639       1 gc_controller.go:151] "Failed to get node" err="node \"ha-472903-m03\" not found" logger="pod-garbage-collector-controller" node="ha-472903-m03"
	E0917 00:20:18.386770       1 gc_controller.go:151] "Failed to get node" err="node \"ha-472903-m03\" not found" logger="pod-garbage-collector-controller" node="ha-472903-m03"
	E0917 00:20:18.386931       1 gc_controller.go:151] "Failed to get node" err="node \"ha-472903-m03\" not found" logger="pod-garbage-collector-controller" node="ha-472903-m03"
	E0917 00:20:18.386963       1 gc_controller.go:151] "Failed to get node" err="node \"ha-472903-m03\" not found" logger="pod-garbage-collector-controller" node="ha-472903-m03"
	E0917 00:20:18.387012       1 gc_controller.go:151] "Failed to get node" err="node \"ha-472903-m03\" not found" logger="pod-garbage-collector-controller" node="ha-472903-m03"
	
	
	==> kube-proxy [940788d22241f65d76b7dd2c97e26a6d9f1ae996b18b10e34b25b96ac9486c2b] <==
	I0917 00:20:47.868285       1 server_linux.go:53] "Using iptables proxy"
	I0917 00:20:47.947073       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E0917 00:20:51.044757       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-472903&limit=500&resourceVersion=0\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I0917 00:20:52.448059       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0917 00:20:52.448085       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0917 00:20:52.448199       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 00:20:52.467787       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0917 00:20:52.467838       1 server_linux.go:132] "Using iptables Proxier"
	I0917 00:20:52.473128       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 00:20:52.473440       1 server.go:527] "Version info" version="v1.34.0"
	I0917 00:20:52.473470       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 00:20:52.474794       1 config.go:200] "Starting service config controller"
	I0917 00:20:52.474823       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0917 00:20:52.474835       1 config.go:309] "Starting node config controller"
	I0917 00:20:52.474850       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0917 00:20:52.475073       1 config.go:106] "Starting endpoint slice config controller"
	I0917 00:20:52.475091       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0917 00:20:52.475111       1 config.go:403] "Starting serviceCIDR config controller"
	I0917 00:20:52.475118       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0917 00:20:52.575631       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0917 00:20:52.575734       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0917 00:20:52.575757       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0917 00:20:52.575757       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [b1c8344888d7deab1a3203bf9e16eefcb945905ec04b591acfb2fed3104948ec] <==
	I0917 00:13:36.733439       1 server_linux.go:53] "Using iptables proxy"
	I0917 00:13:36.818219       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0917 00:13:36.918912       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0917 00:13:36.918966       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0917 00:13:36.919071       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 00:13:36.942838       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0917 00:13:36.942910       1 server_linux.go:132] "Using iptables Proxier"
	I0917 00:13:36.949958       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 00:13:36.950427       1 server.go:527] "Version info" version="v1.34.0"
	I0917 00:13:36.950467       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 00:13:36.954376       1 config.go:200] "Starting service config controller"
	I0917 00:13:36.954506       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0917 00:13:36.954587       1 config.go:309] "Starting node config controller"
	I0917 00:13:36.954660       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0917 00:13:36.954669       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0917 00:13:36.954703       1 config.go:106] "Starting endpoint slice config controller"
	I0917 00:13:36.954712       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0917 00:13:36.954729       1 config.go:403] "Starting serviceCIDR config controller"
	I0917 00:13:36.954736       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0917 00:13:37.054981       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0917 00:13:37.055026       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0917 00:13:37.055057       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [9685cc588651ced2d51ab783a94533fff6a60971435eaa8e11982eb715ef5350] <==
	I0917 00:13:30.068882       1 serving.go:386] Generated self-signed cert in-memory
	I0917 00:13:35.071453       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0917 00:13:35.071492       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 00:13:35.090261       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:13:35.090310       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:13:35.090614       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0917 00:13:35.090722       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0917 00:13:35.090743       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0917 00:13:35.090760       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0917 00:13:35.094479       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0917 00:13:35.094536       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I0917 00:13:35.190629       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:13:35.191303       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0917 00:13:35.194926       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I0917 00:20:22.935351       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0917 00:20:22.935398       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0917 00:20:22.935451       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0917 00:20:22.935586       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I0917 00:20:22.935635       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0917 00:20:22.935664       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:20:22.935867       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0917 00:20:22.935905       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [bb75d704c82d6d915e1e1d7be0255260facc80170d76a4979ea6d51293802131] <==
	I0917 00:20:43.013850       1 serving.go:386] Generated self-signed cert in-memory
	W0917 00:20:46.663703       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0917 00:20:46.663738       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0917 00:20:46.663750       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0917 00:20:46.663760       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0917 00:20:46.712537       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0917 00:20:46.712573       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 00:20:46.718352       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0917 00:20:46.718515       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:20:46.718535       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:20:46.718557       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0917 00:20:46.818894       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 17 00:20:46 ha-472903 kubelet[624]: E0917 00:20:46.747622     624 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ha-472903\" already exists" pod="kube-system/kube-scheduler-ha-472903"
	Sep 17 00:20:46 ha-472903 kubelet[624]: I0917 00:20:46.747658     624 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-vip-ha-472903"
	Sep 17 00:20:46 ha-472903 kubelet[624]: E0917 00:20:46.754392     624 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-vip-ha-472903\" already exists" pod="kube-system/kube-vip-ha-472903"
	Sep 17 00:20:46 ha-472903 kubelet[624]: I0917 00:20:46.754737     624 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-ha-472903"
	Sep 17 00:20:46 ha-472903 kubelet[624]: I0917 00:20:46.756591     624 kubelet_node_status.go:124] "Node was previously registered" node="ha-472903"
	Sep 17 00:20:46 ha-472903 kubelet[624]: I0917 00:20:46.756682     624 kubelet_node_status.go:78] "Successfully registered node" node="ha-472903"
	Sep 17 00:20:46 ha-472903 kubelet[624]: I0917 00:20:46.756718     624 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 17 00:20:46 ha-472903 kubelet[624]: I0917 00:20:46.757912     624 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 17 00:20:46 ha-472903 kubelet[624]: E0917 00:20:46.761449     624 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-ha-472903\" already exists" pod="kube-system/etcd-ha-472903"
	Sep 17 00:20:46 ha-472903 kubelet[624]: I0917 00:20:46.761552     624 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ha-472903"
	Sep 17 00:20:46 ha-472903 kubelet[624]: E0917 00:20:46.768274     624 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ha-472903\" already exists" pod="kube-system/kube-apiserver-ha-472903"
	Sep 17 00:20:47 ha-472903 kubelet[624]: I0917 00:20:47.208385     624 apiserver.go:52] "Watching apiserver"
	Sep 17 00:20:47 ha-472903 kubelet[624]: I0917 00:20:47.309572     624 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 17 00:20:47 ha-472903 kubelet[624]: I0917 00:20:47.382707     624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ac7f283e-4d28-46cf-a519-bd227237d5e7-tmp\") pod \"storage-provisioner\" (UID: \"ac7f283e-4d28-46cf-a519-bd227237d5e7\") " pod="kube-system/storage-provisioner"
	Sep 17 00:20:47 ha-472903 kubelet[624]: I0917 00:20:47.382753     624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/1da43ca7-9af7-4573-9cdc-fd21b098ca2c-cni-cfg\") pod \"kindnet-lh7dv\" (UID: \"1da43ca7-9af7-4573-9cdc-fd21b098ca2c\") " pod="kube-system/kindnet-lh7dv"
	Sep 17 00:20:47 ha-472903 kubelet[624]: I0917 00:20:47.382946     624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1da43ca7-9af7-4573-9cdc-fd21b098ca2c-xtables-lock\") pod \"kindnet-lh7dv\" (UID: \"1da43ca7-9af7-4573-9cdc-fd21b098ca2c\") " pod="kube-system/kindnet-lh7dv"
	Sep 17 00:20:47 ha-472903 kubelet[624]: I0917 00:20:47.382990     624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d4a70eec-48a7-4ea6-871a-1b5ed2beca9a-lib-modules\") pod \"kube-proxy-d4m8f\" (UID: \"d4a70eec-48a7-4ea6-871a-1b5ed2beca9a\") " pod="kube-system/kube-proxy-d4m8f"
	Sep 17 00:20:47 ha-472903 kubelet[624]: I0917 00:20:47.383083     624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d4a70eec-48a7-4ea6-871a-1b5ed2beca9a-xtables-lock\") pod \"kube-proxy-d4m8f\" (UID: \"d4a70eec-48a7-4ea6-871a-1b5ed2beca9a\") " pod="kube-system/kube-proxy-d4m8f"
	Sep 17 00:20:47 ha-472903 kubelet[624]: I0917 00:20:47.383174     624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1da43ca7-9af7-4573-9cdc-fd21b098ca2c-lib-modules\") pod \"kindnet-lh7dv\" (UID: \"1da43ca7-9af7-4573-9cdc-fd21b098ca2c\") " pod="kube-system/kindnet-lh7dv"
	Sep 17 00:20:52 ha-472903 kubelet[624]: I0917 00:20:52.653469     624 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 17 00:20:55 ha-472903 kubelet[624]: I0917 00:20:55.180621     624 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 17 00:21:18 ha-472903 kubelet[624]: I0917 00:21:18.396489     624 scope.go:117] "RemoveContainer" containerID="c8a737e1be33c6e4b6e17f5359483d22d3eeb7ca2497546109c1097eb9343a7f"
	Sep 17 00:21:18 ha-472903 kubelet[624]: I0917 00:21:18.396796     624 scope.go:117] "RemoveContainer" containerID="1bbd0e4ad154dd22e23a6b6b466995ac2dbd1dd27ce2f2d3678d730222c98a17"
	Sep 17 00:21:18 ha-472903 kubelet[624]: E0917 00:21:18.396975     624 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(ac7f283e-4d28-46cf-a519-bd227237d5e7)\"" pod="kube-system/storage-provisioner" podUID="ac7f283e-4d28-46cf-a519-bd227237d5e7"
	Sep 17 00:21:32 ha-472903 kubelet[624]: I0917 00:21:32.231272     624 scope.go:117] "RemoveContainer" containerID="1bbd0e4ad154dd22e23a6b6b466995ac2dbd1dd27ce2f2d3678d730222c98a17"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-472903 -n ha-472903
helpers_test.go:269: (dbg) Run:  kubectl --context ha-472903 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-7b57f96db7-wkqz5
helpers_test.go:282: ======> post-mortem[TestMultiControlPlane/serial/RestartCluster]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context ha-472903 describe pod busybox-7b57f96db7-wkqz5
helpers_test.go:290: (dbg) kubectl --context ha-472903 describe pod busybox-7b57f96db7-wkqz5:

                                                
                                                
-- stdout --
	Name:             busybox-7b57f96db7-wkqz5
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7b57f96db7
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7b57f96db7
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bvn6l (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-bvn6l:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  6m32s                default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  6m32s                default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  6m32s                default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  6m32s                default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  6m32s                default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  6m32s                default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  5m45s                default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  45s                  default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  46s (x2 over 5m46s)  default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestMultiControlPlane/serial/RestartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartCluster (357.83s)
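Note: the FailedScheduling events captured above are consistent with the busybox deployment's pod anti-affinity combined with one unschedulable node, as the scheduler messages state. A rough manual check (illustrative commands only, not part of the captured run; <profile> is a placeholder for the cluster profile name) could be:

	# dump the anti-affinity constraint the scheduler is enforcing
	out/minikube-linux-amd64 -p <profile> kubectl -- get deployment busybox -o jsonpath='{.spec.template.spec.affinity.podAntiAffinity}'
	# list nodes and look for taints / cordoned (unschedulable) nodes
	out/minikube-linux-amd64 -p <profile> kubectl -- get nodes
	out/minikube-linux-amd64 -p <profile> kubectl -- describe nodes | grep -i -E 'Taints|Unschedulable'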

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (63.44s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-099552 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
E0917 00:49:29.519300  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/auto-284174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-099552 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: exit status 90 (1m2.123417862s)

                                                
                                                
-- stdout --
	* [old-k8s-version-099552] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21550
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21550-749120/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-749120/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.34.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.0
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-099552" primary control-plane node in "old-k8s-version-099552" cluster
	* Pulling base image v0.0.48 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 00:49:26.247715 1094183 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:49:26.248045 1094183 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:49:26.248056 1094183 out.go:374] Setting ErrFile to fd 2...
	I0917 00:49:26.248063 1094183 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:49:26.248387 1094183 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-749120/.minikube/bin
	I0917 00:49:26.248948 1094183 out.go:368] Setting JSON to false
	I0917 00:49:26.250458 1094183 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":12708,"bootTime":1758057458,"procs":317,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 00:49:26.250532 1094183 start.go:140] virtualization: kvm guest
	I0917 00:49:26.252131 1094183 out.go:179] * [old-k8s-version-099552] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0917 00:49:26.253542 1094183 out.go:179]   - MINIKUBE_LOCATION=21550
	I0917 00:49:26.253564 1094183 notify.go:220] Checking for updates...
	I0917 00:49:26.256107 1094183 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 00:49:26.257118 1094183 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-749120/kubeconfig
	I0917 00:49:26.260858 1094183 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-749120/.minikube
	I0917 00:49:26.261791 1094183 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 00:49:26.262737 1094183 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 00:49:26.264170 1094183 config.go:182] Loaded profile config "old-k8s-version-099552": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I0917 00:49:26.265827 1094183 out.go:179] * Kubernetes 1.34.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.0
	I0917 00:49:26.266805 1094183 driver.go:421] Setting default libvirt URI to qemu:///system
	I0917 00:49:26.296783 1094183 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0917 00:49:26.296906 1094183 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:49:26.377005 1094183 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-09-17 00:49:26.361786188 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:49:26.377162 1094183 docker.go:318] overlay module found
	I0917 00:49:26.378783 1094183 out.go:179] * Using the docker driver based on existing profile
	I0917 00:49:26.379737 1094183 start.go:304] selected driver: docker
	I0917 00:49:26.379755 1094183 start.go:918] validating driver "docker" against &{Name:old-k8s-version-099552 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-099552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA A
PIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:49:26.379866 1094183 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 00:49:26.380640 1094183 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:49:26.449822 1094183 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-09-17 00:49:26.438284361 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:49:26.450210 1094183 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:49:26.450266 1094183 cni.go:84] Creating CNI manager for ""
	I0917 00:49:26.450339 1094183 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0917 00:49:26.450398 1094183 start.go:348] cluster config:
	{Name:old-k8s-version-099552 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-099552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:contai
nerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:49:26.451991 1094183 out.go:179] * Starting "old-k8s-version-099552" primary control-plane node in "old-k8s-version-099552" cluster
	I0917 00:49:26.452884 1094183 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0917 00:49:26.453872 1094183 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:49:26.454741 1094183 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I0917 00:49:26.454781 1094183 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	I0917 00:49:26.454796 1094183 cache.go:58] Caching tarball of preloaded images
	I0917 00:49:26.454870 1094183 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:49:26.454929 1094183 preload.go:172] Found /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 00:49:26.454951 1094183 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on containerd
	I0917 00:49:26.455052 1094183 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/old-k8s-version-099552/config.json ...
	I0917 00:49:26.481610 1094183 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:49:26.481636 1094183 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:49:26.481654 1094183 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:49:26.481682 1094183 start.go:360] acquireMachinesLock for old-k8s-version-099552: {Name:mk1dfeb7c690a870e027e1744db1640770d934c9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:49:26.481755 1094183 start.go:364] duration metric: took 46.034µs to acquireMachinesLock for "old-k8s-version-099552"
	I0917 00:49:26.481779 1094183 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:49:26.481789 1094183 fix.go:54] fixHost starting: 
	I0917 00:49:26.482102 1094183 cli_runner.go:164] Run: docker container inspect old-k8s-version-099552 --format={{.State.Status}}
	I0917 00:49:26.507018 1094183 fix.go:112] recreateIfNeeded on old-k8s-version-099552: state=Stopped err=<nil>
	W0917 00:49:26.507050 1094183 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:49:26.508922 1094183 out.go:252] * Restarting existing docker container for "old-k8s-version-099552" ...
	I0917 00:49:26.508999 1094183 cli_runner.go:164] Run: docker start old-k8s-version-099552
	I0917 00:49:26.805484 1094183 cli_runner.go:164] Run: docker container inspect old-k8s-version-099552 --format={{.State.Status}}
	I0917 00:49:26.830799 1094183 kic.go:430] container "old-k8s-version-099552" state is running.
	I0917 00:49:26.831319 1094183 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-099552
	I0917 00:49:26.853667 1094183 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/old-k8s-version-099552/config.json ...
	I0917 00:49:26.853984 1094183 machine.go:93] provisionDockerMachine start ...
	I0917 00:49:26.854093 1094183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-099552
	I0917 00:49:26.876957 1094183 main.go:141] libmachine: Using SSH client type: native
	I0917 00:49:26.877177 1094183 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33884 <nil> <nil>}
	I0917 00:49:26.877184 1094183 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:49:26.877876 1094183 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52472->127.0.0.1:33884: read: connection reset by peer
	I0917 00:49:30.023542 1094183 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-099552
	
	I0917 00:49:30.023580 1094183 ubuntu.go:182] provisioning hostname "old-k8s-version-099552"
	I0917 00:49:30.023652 1094183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-099552
	I0917 00:49:30.053010 1094183 main.go:141] libmachine: Using SSH client type: native
	I0917 00:49:30.053336 1094183 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33884 <nil> <nil>}
	I0917 00:49:30.053361 1094183 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-099552 && echo "old-k8s-version-099552" | sudo tee /etc/hostname
	I0917 00:49:30.210224 1094183 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-099552
	
	I0917 00:49:30.210324 1094183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-099552
	I0917 00:49:30.229925 1094183 main.go:141] libmachine: Using SSH client type: native
	I0917 00:49:30.230242 1094183 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33884 <nil> <nil>}
	I0917 00:49:30.230268 1094183 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-099552' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-099552/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-099552' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 00:49:30.378279 1094183 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:49:30.378317 1094183 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-749120/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-749120/.minikube}
	I0917 00:49:30.378341 1094183 ubuntu.go:190] setting up certificates
	I0917 00:49:30.378353 1094183 provision.go:84] configureAuth start
	I0917 00:49:30.378409 1094183 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-099552
	I0917 00:49:30.396119 1094183 provision.go:143] copyHostCerts
	I0917 00:49:30.396176 1094183 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem, removing ...
	I0917 00:49:30.396200 1094183 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem
	I0917 00:49:30.396296 1094183 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem (1078 bytes)
	I0917 00:49:30.396469 1094183 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem, removing ...
	I0917 00:49:30.396484 1094183 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem
	I0917 00:49:30.396533 1094183 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem (1123 bytes)
	I0917 00:49:30.396626 1094183 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem, removing ...
	I0917 00:49:30.396636 1094183 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem
	I0917 00:49:30.396683 1094183 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem (1675 bytes)
	I0917 00:49:30.396777 1094183 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-099552 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-099552]
	I0917 00:49:31.449126 1094183 provision.go:177] copyRemoteCerts
	I0917 00:49:31.449208 1094183 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:49:31.449253 1094183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-099552
	I0917 00:49:31.482894 1094183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33884 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/old-k8s-version-099552/id_rsa Username:docker}
	I0917 00:49:31.587954 1094183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0917 00:49:31.617948 1094183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 00:49:31.644802 1094183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 00:49:31.670059 1094183 provision.go:87] duration metric: took 1.291694764s to configureAuth
	I0917 00:49:31.670086 1094183 ubuntu.go:206] setting minikube options for container-runtime
	I0917 00:49:31.670288 1094183 config.go:182] Loaded profile config "old-k8s-version-099552": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I0917 00:49:31.670302 1094183 machine.go:96] duration metric: took 4.816300849s to provisionDockerMachine
	I0917 00:49:31.670311 1094183 start.go:293] postStartSetup for "old-k8s-version-099552" (driver="docker")
	I0917 00:49:31.670327 1094183 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 00:49:31.670383 1094183 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 00:49:31.670456 1094183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-099552
	I0917 00:49:31.688025 1094183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33884 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/old-k8s-version-099552/id_rsa Username:docker}
	I0917 00:49:31.789670 1094183 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 00:49:31.793755 1094183 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 00:49:31.793795 1094183 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 00:49:31.793806 1094183 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 00:49:31.793847 1094183 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 00:49:31.793864 1094183 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-749120/.minikube/addons for local assets ...
	I0917 00:49:31.793938 1094183 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-749120/.minikube/files for local assets ...
	I0917 00:49:31.794044 1094183 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> 7527072.pem in /etc/ssl/certs
	I0917 00:49:31.794162 1094183 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 00:49:31.805820 1094183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem --> /etc/ssl/certs/7527072.pem (1708 bytes)
	I0917 00:49:31.838790 1094183 start.go:296] duration metric: took 168.46ms for postStartSetup
	I0917 00:49:31.838893 1094183 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:49:31.839012 1094183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-099552
	I0917 00:49:31.859254 1094183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33884 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/old-k8s-version-099552/id_rsa Username:docker}
	I0917 00:49:31.955500 1094183 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:49:31.959859 1094183 fix.go:56] duration metric: took 5.478061571s for fixHost
	I0917 00:49:31.959884 1094183 start.go:83] releasing machines lock for "old-k8s-version-099552", held for 5.478113895s
	I0917 00:49:31.959948 1094183 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-099552
	I0917 00:49:31.978349 1094183 ssh_runner.go:195] Run: cat /version.json
	I0917 00:49:31.978410 1094183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-099552
	I0917 00:49:31.978457 1094183 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 00:49:31.978539 1094183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-099552
	I0917 00:49:31.995715 1094183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33884 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/old-k8s-version-099552/id_rsa Username:docker}
	I0917 00:49:31.996098 1094183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33884 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/old-k8s-version-099552/id_rsa Username:docker}
	I0917 00:49:32.171137 1094183 ssh_runner.go:195] Run: systemctl --version
	I0917 00:49:32.176128 1094183 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 00:49:32.180614 1094183 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0917 00:49:32.199065 1094183 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0917 00:49:32.199110 1094183 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:49:32.207908 1094183 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0917 00:49:32.207930 1094183 start.go:495] detecting cgroup driver to use...
	I0917 00:49:32.207961 1094183 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:49:32.207996 1094183 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 00:49:32.221326 1094183 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 00:49:32.232535 1094183 docker.go:218] disabling cri-docker service (if available) ...
	I0917 00:49:32.232574 1094183 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 00:49:32.245345 1094183 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 00:49:32.255955 1094183 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 00:49:32.318999 1094183 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 00:49:32.386657 1094183 docker.go:234] disabling docker service ...
	I0917 00:49:32.386726 1094183 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 00:49:32.398590 1094183 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 00:49:32.409407 1094183 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 00:49:32.477654 1094183 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 00:49:32.542513 1094183 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 00:49:32.553662 1094183 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:49:32.570880 1094183 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0917 00:49:32.580731 1094183 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 00:49:32.590240 1094183 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0917 00:49:32.590282 1094183 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0917 00:49:32.599990 1094183 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:49:32.609220 1094183 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 00:49:32.619018 1094183 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:49:32.628402 1094183 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 00:49:32.637096 1094183 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 00:49:32.646438 1094183 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 00:49:32.656309 1094183 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 00:49:32.665839 1094183 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 00:49:32.673875 1094183 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 00:49:32.682212 1094183 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:49:32.746047 1094183 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0917 00:49:32.821289 1094183 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0917 00:49:32.821362 1094183 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0917 00:49:32.825371 1094183 start.go:563] Will wait 60s for crictl version
	I0917 00:49:32.825428 1094183 ssh_runner.go:195] Run: which crictl
	I0917 00:49:32.828846 1094183 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:49:32.862717 1094183 retry.go:31] will retry after 8.824774793s: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:49:32.860217     487 remote_runtime.go:189] "Version from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	time="2025-09-17T00:49:32Z" level=fatal msg="getting the runtime version: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	I0917 00:49:41.687655 1094183 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:49:41.724907 1094183 retry.go:31] will retry after 16.782976461s: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:49:41.721985     498 remote_runtime.go:189] "Version from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	time="2025-09-17T00:49:41Z" level=fatal msg="getting the runtime version: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	I0917 00:49:58.508547 1094183 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:49:58.553215 1094183 retry.go:31] will retry after 12.137832762s: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:49:58.550604     509 remote_runtime.go:189] "Version from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	time="2025-09-17T00:49:58Z" level=fatal msg="getting the runtime version: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	I0917 00:50:10.695055 1094183 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:50:10.729919 1094183 retry.go:31] will retry after 17.540135534s: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:50:10.727358     521 remote_runtime.go:189] "Version from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	time="2025-09-17T00:50:10Z" level=fatal msg="getting the runtime version: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	I0917 00:50:28.271397 1094183 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:50:28.306326 1094183 out.go:203] 
	W0917 00:50:28.307357 1094183 out.go:285] X Exiting due to RUNTIME_ENABLE: Failed to start container runtime: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:50:28.301888     533 remote_runtime.go:189] "Version from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	time="2025-09-17T00:50:28Z" level=fatal msg="getting the runtime version: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	
	X Exiting due to RUNTIME_ENABLE: Failed to start container runtime: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:50:28.301888     533 remote_runtime.go:189] "Version from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	time="2025-09-17T00:50:28Z" level=fatal msg="getting the runtime version: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	
	W0917 00:50:28.307374 1094183 out.go:285] * 
	* 
	W0917 00:50:28.309097 1094183 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 00:50:28.310228 1094183 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-099552 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0": exit status 90
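Exit status 90 here is minikube's RUNTIME_ENABLE error: every crictl version probe in the stderr log fails with "unknown service runtime.v1alpha2.RuntimeService", so the container runtime is never confirmed ready. A hand-run probe against the restarted node container (illustrative only; standard docker and crictl invocations, ctr only if it is present in the image) might look like:

	# query the CRI endpoint the test used, from inside the node container
	docker exec old-k8s-version-099552 crictl --runtime-endpoint unix:///run/containerd/containerd.sock version
	# cross-check the containerd daemon version directly, if ctr is installed
	docker exec old-k8s-version-099552 ctr version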
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-099552
helpers_test.go:243: (dbg) docker inspect old-k8s-version-099552:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "dc5a2344012058adc433d4d94bb8f0cfb2d2d9a3fcc4c579e400734d708bf606",
	        "Created": "2025-09-17T00:47:14.452618877Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1094443,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-17T00:49:26.538560853Z",
	            "FinishedAt": "2025-09-17T00:49:25.518757767Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/dc5a2344012058adc433d4d94bb8f0cfb2d2d9a3fcc4c579e400734d708bf606/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dc5a2344012058adc433d4d94bb8f0cfb2d2d9a3fcc4c579e400734d708bf606/hostname",
	        "HostsPath": "/var/lib/docker/containers/dc5a2344012058adc433d4d94bb8f0cfb2d2d9a3fcc4c579e400734d708bf606/hosts",
	        "LogPath": "/var/lib/docker/containers/dc5a2344012058adc433d4d94bb8f0cfb2d2d9a3fcc4c579e400734d708bf606/dc5a2344012058adc433d4d94bb8f0cfb2d2d9a3fcc4c579e400734d708bf606-json.log",
	        "Name": "/old-k8s-version-099552",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-099552:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-099552",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dc5a2344012058adc433d4d94bb8f0cfb2d2d9a3fcc4c579e400734d708bf606",
	                "LowerDir": "/var/lib/docker/overlay2/5a917f22cc5c8d72ade6fe744606e8b48681ead29c7cc52e99c757cb3866734e-init/diff:/var/lib/docker/overlay2/949a3fbecd0c2c005aa419b7ddc214e7bf4333225d7b227e8b0d0ea188b945ec/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5a917f22cc5c8d72ade6fe744606e8b48681ead29c7cc52e99c757cb3866734e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5a917f22cc5c8d72ade6fe744606e8b48681ead29c7cc52e99c757cb3866734e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5a917f22cc5c8d72ade6fe744606e8b48681ead29c7cc52e99c757cb3866734e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-099552",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-099552/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-099552",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-099552",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-099552",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2ddde64c24f24e3766a57841a672431afff6e67b8b55455f7a18ce1a12566fcb",
	            "SandboxKey": "/var/run/docker/netns/2ddde64c24f2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33884"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33885"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33888"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33886"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33887"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-099552": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ea:0f:62:a1:13:1a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c084050a20a9e46a211e9f023f9558fec9400a691d73a4266e29ff60000fdc12",
	                    "EndpointID": "d69770401db68668497e8a9ddef6c93a77673f95c78cb11a18c567c6244c9d3d",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-099552",
	                        "dc5a23440120"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-099552 -n old-k8s-version-099552
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-099552 -n old-k8s-version-099552: exit status 6 (291.108972ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0917 00:50:28.611913 1106210 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-099552" does not appear in /home/jenkins/minikube-integration/21550-749120/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
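The status probe's own stdout above suggests the fix for the stale kubectl context; run by hand (illustrative, not part of the captured run) it would be:

	# repoint kubeconfig at the restarted profile, as the status output advises
	out/minikube-linux-amd64 -p old-k8s-version-099552 update-context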
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-099552 logs -n 25
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────
─────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────
─────────────┤
	│ addons  │ enable metrics-server -p old-k8s-version-099552 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-099552       │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ stop    │ -p old-k8s-version-099552 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-099552       │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ addons  │ enable metrics-server -p newest-cni-895748 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ newest-cni-895748            │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-011954 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ default-k8s-diff-port-011954 │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ start   │ -p default-k8s-diff-port-011954 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0                                                                      │ default-k8s-diff-port-011954 │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:50 UTC │
	│ stop    │ -p newest-cni-895748 --alsologtostderr -v=3                                                                                                                                                                                                         │ newest-cni-895748            │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ addons  │ enable dashboard -p newest-cni-895748 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ newest-cni-895748            │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ start   │ -p newest-cni-895748 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0 │ newest-cni-895748            │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-099552 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-099552       │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ start   │ -p old-k8s-version-099552 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-099552       │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │                     │
	│ image   │ newest-cni-895748 image list --format=json                                                                                                                                                                                                          │ newest-cni-895748            │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ pause   │ -p newest-cni-895748 --alsologtostderr -v=1                                                                                                                                                                                                         │ newest-cni-895748            │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ unpause │ -p newest-cni-895748 --alsologtostderr -v=1                                                                                                                                                                                                         │ newest-cni-895748            │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ delete  │ -p newest-cni-895748                                                                                                                                                                                                                                │ newest-cni-895748            │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ delete  │ -p newest-cni-895748                                                                                                                                                                                                                                │ newest-cni-895748            │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ delete  │ -p disable-driver-mounts-908870                                                                                                                                                                                                                     │ disable-driver-mounts-908870 │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ start   │ -p embed-certs-656365 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0                                                                                        │ embed-certs-656365           │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-305343 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ no-preload-305343            │ jenkins │ v1.37.0 │ 17 Sep 25 00:50 UTC │ 17 Sep 25 00:50 UTC │
	│ stop    │ -p no-preload-305343 --alsologtostderr -v=3                                                                                                                                                                                                         │ no-preload-305343            │ jenkins │ v1.37.0 │ 17 Sep 25 00:50 UTC │ 17 Sep 25 00:50 UTC │
	│ image   │ default-k8s-diff-port-011954 image list --format=json                                                                                                                                                                                               │ default-k8s-diff-port-011954 │ jenkins │ v1.37.0 │ 17 Sep 25 00:50 UTC │ 17 Sep 25 00:50 UTC │
	│ pause   │ -p default-k8s-diff-port-011954 --alsologtostderr -v=1                                                                                                                                                                                              │ default-k8s-diff-port-011954 │ jenkins │ v1.37.0 │ 17 Sep 25 00:50 UTC │ 17 Sep 25 00:50 UTC │
	│ unpause │ -p default-k8s-diff-port-011954 --alsologtostderr -v=1                                                                                                                                                                                              │ default-k8s-diff-port-011954 │ jenkins │ v1.37.0 │ 17 Sep 25 00:50 UTC │ 17 Sep 25 00:50 UTC │
	│ delete  │ -p default-k8s-diff-port-011954                                                                                                                                                                                                                     │ default-k8s-diff-port-011954 │ jenkins │ v1.37.0 │ 17 Sep 25 00:50 UTC │ 17 Sep 25 00:50 UTC │
	│ addons  │ enable dashboard -p no-preload-305343 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ no-preload-305343            │ jenkins │ v1.37.0 │ 17 Sep 25 00:50 UTC │ 17 Sep 25 00:50 UTC │
	│ start   │ -p no-preload-305343 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0                                                                                       │ no-preload-305343            │ jenkins │ v1.37.0 │ 17 Sep 25 00:50 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────
─────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/17 00:50:26
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 00:50:26.600102 1105542 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:50:26.600372 1105542 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:50:26.600383 1105542 out.go:374] Setting ErrFile to fd 2...
	I0917 00:50:26.600387 1105542 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:50:26.600688 1105542 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-749120/.minikube/bin
	I0917 00:50:26.601270 1105542 out.go:368] Setting JSON to false
	I0917 00:50:26.602722 1105542 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":12769,"bootTime":1758057458,"procs":266,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 00:50:26.602857 1105542 start.go:140] virtualization: kvm guest
	I0917 00:50:26.604653 1105542 out.go:179] * [no-preload-305343] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0917 00:50:26.605852 1105542 out.go:179]   - MINIKUBE_LOCATION=21550
	I0917 00:50:26.605872 1105542 notify.go:220] Checking for updates...
	I0917 00:50:26.607972 1105542 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 00:50:26.609198 1105542 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-749120/kubeconfig
	I0917 00:50:26.610268 1105542 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-749120/.minikube
	I0917 00:50:26.611322 1105542 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 00:50:26.612361 1105542 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 00:50:26.613681 1105542 config.go:182] Loaded profile config "no-preload-305343": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:50:26.614196 1105542 driver.go:421] Setting default libvirt URI to qemu:///system
	I0917 00:50:26.637278 1105542 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0917 00:50:26.637376 1105542 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:50:26.690682 1105542 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:65 SystemTime:2025-09-17 00:50:26.681595287 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:50:26.690839 1105542 docker.go:318] overlay module found
	I0917 00:50:26.692660 1105542 out.go:179] * Using the docker driver based on existing profile
	I0917 00:50:26.693532 1105542 start.go:304] selected driver: docker
	I0917 00:50:26.693548 1105542 start.go:918] validating driver "docker" against &{Name:no-preload-305343 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-305343 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:50:26.693646 1105542 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 00:50:26.694360 1105542 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:50:26.747230 1105542 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:65 SystemTime:2025-09-17 00:50:26.73681015 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:50:26.747565 1105542 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:50:26.747603 1105542 cni.go:84] Creating CNI manager for ""
	I0917 00:50:26.747674 1105542 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0917 00:50:26.747730 1105542 start.go:348] cluster config:
	{Name:no-preload-305343 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-305343 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:50:26.749227 1105542 out.go:179] * Starting "no-preload-305343" primary control-plane node in "no-preload-305343" cluster
	I0917 00:50:26.750402 1105542 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0917 00:50:26.751455 1105542 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:50:26.752350 1105542 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0917 00:50:26.752446 1105542 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:50:26.752497 1105542 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/no-preload-305343/config.json ...
	I0917 00:50:26.752648 1105542 cache.go:107] acquiring lock: {Name:mk4909779fb0f5743ddfc059d2d0162861e84f07 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:50:26.752675 1105542 cache.go:107] acquiring lock: {Name:mk6df29775b0b58d1ac8dea5ffe905dd7aa0e789 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:50:26.752655 1105542 cache.go:107] acquiring lock: {Name:mkddf7ba64ef1815649c8a0d31e1ab341ed655cd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:50:26.752697 1105542 cache.go:107] acquiring lock: {Name:mk97f69e3cbe6c234e4de1197be2229ef06ba13f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:50:26.752737 1105542 cache.go:107] acquiring lock: {Name:mkaa0f91c4e98db3393f92864e13e9189082e595 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:50:26.752794 1105542 cache.go:115] /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.0 exists
	I0917 00:50:26.752805 1105542 cache.go:115] /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0917 00:50:26.752810 1105542 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.0" -> "/home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.0" took 171.127µs
	I0917 00:50:26.752837 1105542 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.0 -> /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.0 succeeded
	I0917 00:50:26.752817 1105542 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 180.429µs
	I0917 00:50:26.752848 1105542 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0917 00:50:26.752844 1105542 cache.go:115] /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.0 exists
	I0917 00:50:26.752830 1105542 cache.go:107] acquiring lock: {Name:mkf4cb04c1071ecafdc32f1a85d4e090a7c4807c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:50:26.752860 1105542 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.0" -> "/home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.0" took 124.82µs
	I0917 00:50:26.752867 1105542 cache.go:115] /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I0917 00:50:26.752870 1105542 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.0 -> /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.0 succeeded
	I0917 00:50:26.752850 1105542 cache.go:115] /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.0 exists
	I0917 00:50:26.752876 1105542 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 206.283µs
	I0917 00:50:26.752885 1105542 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I0917 00:50:26.752884 1105542 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.0" -> "/home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.0" took 209.597µs
	I0917 00:50:26.752892 1105542 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.0 -> /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.0 succeeded
	I0917 00:50:26.752943 1105542 cache.go:115] /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I0917 00:50:26.752908 1105542 cache.go:107] acquiring lock: {Name:mkc74a1cd2fbc63086196dff6872225ceed330b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:50:26.752953 1105542 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 170.837µs
	I0917 00:50:26.752932 1105542 cache.go:107] acquiring lock: {Name:mka95aa97be9e772922157c335bf881cd020f83a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:50:26.752963 1105542 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I0917 00:50:26.753088 1105542 cache.go:115] /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I0917 00:50:26.753104 1105542 cache.go:115] /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.0 exists
	I0917 00:50:26.753103 1105542 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 260.35µs
	I0917 00:50:26.753116 1105542 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I0917 00:50:26.753114 1105542 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.0" -> "/home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.0" took 233.147µs
	I0917 00:50:26.753124 1105542 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.0 -> /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.0 succeeded
	I0917 00:50:26.753139 1105542 cache.go:87] Successfully saved all images to host disk.
	I0917 00:50:26.772809 1105542 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:50:26.772828 1105542 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:50:26.772844 1105542 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:50:26.772868 1105542 start.go:360] acquireMachinesLock for no-preload-305343: {Name:mk301cc5652bfe73a264aaf61a48b9167df412f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:50:26.772918 1105542 start.go:364] duration metric: took 33.839µs to acquireMachinesLock for "no-preload-305343"
	I0917 00:50:26.772939 1105542 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:50:26.772949 1105542 fix.go:54] fixHost starting: 
	I0917 00:50:26.773161 1105542 cli_runner.go:164] Run: docker container inspect no-preload-305343 --format={{.State.Status}}
	I0917 00:50:26.790526 1105542 fix.go:112] recreateIfNeeded on no-preload-305343: state=Stopped err=<nil>
	W0917 00:50:26.790551 1105542 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:50:23.500084 1099270 system_pods.go:86] 8 kube-system pods found
	I0917 00:50:23.500124 1099270 system_pods.go:89] "coredns-66bc5c9577-l6kf4" [3471ccde-7a2f-40db-9b89-0b0b1d99d708] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:50:23.500134 1099270 system_pods.go:89] "etcd-embed-certs-656365" [7019686f-c4e9-4229-96b9-b0b736f1ff1f] Running
	I0917 00:50:23.500142 1099270 system_pods.go:89] "kindnet-82pzc" [98599a74-dfca-4aaa-8c5f-f5b5fa514aea] Running
	I0917 00:50:23.500148 1099270 system_pods.go:89] "kube-apiserver-embed-certs-656365" [4474c0bb-d251-4a3f-9617-05519f6f36e1] Running
	I0917 00:50:23.500156 1099270 system_pods.go:89] "kube-controller-manager-embed-certs-656365" [9d8ea73e-c495-4fc0-8e50-d21f3bbd7bf7] Running
	I0917 00:50:23.500163 1099270 system_pods.go:89] "kube-proxy-h2lgd" [ec2eebd5-b4b3-41cf-af3b-efda8464fe22] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 00:50:23.500173 1099270 system_pods.go:89] "kube-scheduler-embed-certs-656365" [74679934-eb97-474d-99a2-28c356aa74b4] Running
	I0917 00:50:23.500181 1099270 system_pods.go:89] "storage-provisioner" [124939ce-5cb9-41fa-b555-2b807af05792] Running
	I0917 00:50:23.500205 1099270 retry.go:31] will retry after 4.483504149s: missing components: kube-dns
	I0917 00:50:28.271397 1094183 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:50:28.306326 1094183 out.go:203] 
	W0917 00:50:28.307357 1094183 out.go:285] X Exiting due to RUNTIME_ENABLE: Failed to start container runtime: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:50:28.301888     533 remote_runtime.go:189] "Version from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	time="2025-09-17T00:50:28Z" level=fatal msg="getting the runtime version: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	
	W0917 00:50:28.307374 1094183 out.go:285] * 
	W0917 00:50:28.309097 1094183 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 00:50:28.310228 1094183 out.go:203] 
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:50:29.199551     672 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	time="2025-09-17T00:50:29Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> containerd <==
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.817851916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.817861430Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.817871180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.817881996Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.817897602Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.817915140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.817935663Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818007576Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818024724Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818033043Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818056520Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818070483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818081010Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818090259Z" level=info msg="NRI interface is disabled by configuration."
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818098205Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818326501Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRu
ntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:true IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath:/etc/containerd/certs.d Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress: StreamServerPort:10010 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.9 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissi
ngHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:true EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818370040Z" level=info msg="Connect containerd service"
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818405590Z" level=info msg="using legacy CRI server"
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818449224Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818553850Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818992386Z" level=warning msg="failed to load plugin io.containerd.grpc.v1.cri" error="failed to create CRI service: failed to create cni conf monitor for default: failed to create fsnotify watcher: too many open files"
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.819183751Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.819229690Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.819274263Z" level=info msg="containerd successfully booted in 0.026038s"
	Sep 17 00:49:32 old-k8s-version-099552 systemd[1]: Started containerd container runtime.
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.397971] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev bridge
	[  +1.103886] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev bridge
	[  +0.397468] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev bridge
	[  +0.988018] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev bridge
	[  +0.115808] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev bridge
	[  +0.396948] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev bridge
	[  +1.104485] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev bridge
	[  +0.396293] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev bridge
	[  +1.105124] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev bridge
	[  +0.396148] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev bridge
	[  +1.500649] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev bridge
	[  +0.569526] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 5f ef 1c 19 8f 08 06
	[ +14.523051] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 56 52 1e b8 75 d2 08 06
	[  +0.000432] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff ba 5e 90 6a a2 f3 08 06
	[  +7.560975] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 2a d4 fe 64 89 0f 08 06
	[  +0.000660] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d6 50 38 7f fb 72 08 06
	[Sep17 00:48] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ea ab 9c df 51 6c 08 06
	[  +0.000561] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 92 5f ef 1c 19 8f 08 06
	
	
	==> kernel <==
	 00:50:29 up  3:32,  0 users,  load average: 2.92, 2.90, 2.17
	Linux old-k8s-version-099552 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kubelet <==
	-- No entries --
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0917 00:50:28.896538 1106321 logs.go:279] Failed to list containers for "kube-apiserver": crictl list: sudo crictl ps -a --quiet --name=kube-apiserver: Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:50:28.893692     561 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	time="2025-09-17T00:50:28Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0917 00:50:28.929944 1106321 logs.go:279] Failed to list containers for "etcd": crictl list: sudo crictl ps -a --quiet --name=etcd: Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:50:28.927486     573 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	time="2025-09-17T00:50:28Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0917 00:50:28.964670 1106321 logs.go:279] Failed to list containers for "coredns": crictl list: sudo crictl ps -a --quiet --name=coredns: Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:50:28.962064     585 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	time="2025-09-17T00:50:28Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0917 00:50:28.998669 1106321 logs.go:279] Failed to list containers for "kube-scheduler": crictl list: sudo crictl ps -a --quiet --name=kube-scheduler: Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:50:28.995771     597 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	time="2025-09-17T00:50:28Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0917 00:50:29.032947 1106321 logs.go:279] Failed to list containers for "kube-proxy": crictl list: sudo crictl ps -a --quiet --name=kube-proxy: Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:50:29.030443     609 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	time="2025-09-17T00:50:29Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0917 00:50:29.066421 1106321 logs.go:279] Failed to list containers for "kube-controller-manager": crictl list: sudo crictl ps -a --quiet --name=kube-controller-manager: Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:50:29.063958     621 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	time="2025-09-17T00:50:29Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0917 00:50:29.098390 1106321 logs.go:279] Failed to list containers for "kindnet": crictl list: sudo crictl ps -a --quiet --name=kindnet: Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:50:29.096178     633 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	time="2025-09-17T00:50:29Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0917 00:50:29.131347 1106321 logs.go:279] Failed to list containers for "storage-provisioner": crictl list: sudo crictl ps -a --quiet --name=storage-provisioner: Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:50:29.128887     645 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	time="2025-09-17T00:50:29Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0917 00:50:29.164312 1106321 logs.go:279] Failed to list containers for "kubernetes-dashboard": crictl list: sudo crictl ps -a --quiet --name=kubernetes-dashboard: Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:50:29.161888     657 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	time="2025-09-17T00:50:29Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"

                                                
                                                
** /stderr **
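The containerd excerpt in the logs above points at the likely root cause of every "unknown service runtime.v1alpha2.RuntimeService" error in this dump: the io.containerd.grpc.v1.cri plugin was skipped with "failed to create CRI service: ... failed to create fsnotify watcher: too many open files", so containerd came up without any CRI service registered. That symptom is usually a host-level inotify or file-descriptor limit on the CI machine rather than anything in the profile itself. The snippet below is a minimal, hypothetical Go check (not part of the test suite) that reads the standard Linux inotify sysctls an operator might inspect before raising them; the procfs paths are the usual locations and the sysctl shown in the comment is only an example value.

package main

import (
	"fmt"
	"os"
	"strings"
)

// readSysctl returns the value of a procfs sysctl file as a trimmed string.
func readSysctl(path string) string {
	b, err := os.ReadFile(path)
	if err != nil {
		return fmt.Sprintf("unreadable (%v)", err)
	}
	return strings.TrimSpace(string(b))
}

func main() {
	// Standard Linux locations; exhausting max_user_instances is a common
	// cause of "failed to create fsnotify watcher: too many open files".
	for _, p := range []string{
		"/proc/sys/fs/inotify/max_user_instances",
		"/proc/sys/fs/inotify/max_user_watches",
	} {
		fmt.Printf("%s = %s\n", p, readSysctl(p))
	}
	// Raising the limit would be done out of band, e.g.:
	//   sysctl fs.inotify.max_user_instances=512   (example value, not a recommendation)
}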
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-099552 -n old-k8s-version-099552
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-099552 -n old-k8s-version-099552: exit status 6 (273.505842ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0917 00:50:29.619593 1106582 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-099552" does not appear in /home/jenkins/minikube-integration/21550-749120/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "old-k8s-version-099552" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (63.44s)
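Given the RUNTIME_ENABLE failure above, a quick way to tell whether containerd is serving any CRI API on the node is to call the v1 Version RPC directly. The sketch below is hypothetical (not part of the test suite) and assumes the default containerd socket path inside the kic container; with the cri plugin skipped as in this run it should fail with codes.Unimplemented just like the bundled crictl did, whereas on a healthy node it prints the runtime and CRI API version.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Default containerd CRI endpoint (assumption; adjust for the environment).
	conn, err := grpc.DialContext(ctx, "unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()), grpc.WithBlock())
	if err != nil {
		log.Fatalf("dial containerd: %v", err)
	}
	defer conn.Close()

	resp, err := runtimeapi.NewRuntimeServiceClient(conn).Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		// With the cri plugin not loaded, this comes back as rpc error: code = Unimplemented.
		log.Fatalf("CRI v1 Version call failed: %v", err)
	}
	fmt.Printf("runtime %s %s, CRI API %s\n", resp.RuntimeName, resp.RuntimeVersion, resp.RuntimeApiVersion)
}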

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (1.34s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-099552" does not exist
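The "context ... does not exist" failure is the same kubeconfig problem the status commands above reported ("does not appear in .../kubeconfig"): the restarted profile never had its context written back, so every kubectl-based step is skipped or fails. A minimal sketch of that check, assuming client-go's clientcmd package and reusing the kubeconfig path and profile name from this run, is shown below; `minikube update-context`, as suggested in the status output, is the usual way to repair it.

package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Path and context name taken from the log above; both are environment-specific.
	kubeconfig := "/home/jenkins/minikube-integration/21550-749120/kubeconfig"
	profile := "old-k8s-version-099552"

	cfg, err := clientcmd.LoadFromFile(kubeconfig)
	if err != nil {
		log.Fatalf("load kubeconfig: %v", err)
	}
	ctxEntry, ok := cfg.Contexts[profile]
	if !ok {
		// This is the state the failing tests hit after the second start.
		log.Fatalf("context %q does not exist in %s", profile, kubeconfig)
	}
	fmt.Printf("context %q points at cluster %q\n", profile, ctxEntry.Cluster)
}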
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-099552
helpers_test.go:243: (dbg) docker inspect old-k8s-version-099552:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "dc5a2344012058adc433d4d94bb8f0cfb2d2d9a3fcc4c579e400734d708bf606",
	        "Created": "2025-09-17T00:47:14.452618877Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1094443,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-17T00:49:26.538560853Z",
	            "FinishedAt": "2025-09-17T00:49:25.518757767Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/dc5a2344012058adc433d4d94bb8f0cfb2d2d9a3fcc4c579e400734d708bf606/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dc5a2344012058adc433d4d94bb8f0cfb2d2d9a3fcc4c579e400734d708bf606/hostname",
	        "HostsPath": "/var/lib/docker/containers/dc5a2344012058adc433d4d94bb8f0cfb2d2d9a3fcc4c579e400734d708bf606/hosts",
	        "LogPath": "/var/lib/docker/containers/dc5a2344012058adc433d4d94bb8f0cfb2d2d9a3fcc4c579e400734d708bf606/dc5a2344012058adc433d4d94bb8f0cfb2d2d9a3fcc4c579e400734d708bf606-json.log",
	        "Name": "/old-k8s-version-099552",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-099552:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-099552",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dc5a2344012058adc433d4d94bb8f0cfb2d2d9a3fcc4c579e400734d708bf606",
	                "LowerDir": "/var/lib/docker/overlay2/5a917f22cc5c8d72ade6fe744606e8b48681ead29c7cc52e99c757cb3866734e-init/diff:/var/lib/docker/overlay2/949a3fbecd0c2c005aa419b7ddc214e7bf4333225d7b227e8b0d0ea188b945ec/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5a917f22cc5c8d72ade6fe744606e8b48681ead29c7cc52e99c757cb3866734e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5a917f22cc5c8d72ade6fe744606e8b48681ead29c7cc52e99c757cb3866734e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5a917f22cc5c8d72ade6fe744606e8b48681ead29c7cc52e99c757cb3866734e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-099552",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-099552/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-099552",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-099552",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-099552",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2ddde64c24f24e3766a57841a672431afff6e67b8b55455f7a18ce1a12566fcb",
	            "SandboxKey": "/var/run/docker/netns/2ddde64c24f2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33884"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33885"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33888"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33886"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33887"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-099552": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ea:0f:62:a1:13:1a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c084050a20a9e46a211e9f023f9558fec9400a691d73a4266e29ff60000fdc12",
	                    "EndpointID": "d69770401db68668497e8a9ddef6c93a77673f95c78cb11a18c567c6244c9d3d",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-099552",
	                        "dc5a23440120"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-099552 -n old-k8s-version-099552
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-099552 -n old-k8s-version-099552: exit status 6 (273.572658ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0917 00:50:29.910892 1106700 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-099552" does not appear in /home/jenkins/minikube-integration/21550-749120/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-099552 logs -n 25
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p old-k8s-version-099552 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-099552       │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ stop    │ -p old-k8s-version-099552 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-099552       │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ addons  │ enable metrics-server -p newest-cni-895748 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ newest-cni-895748            │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-011954 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ default-k8s-diff-port-011954 │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ start   │ -p default-k8s-diff-port-011954 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0                                                                      │ default-k8s-diff-port-011954 │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:50 UTC │
	│ stop    │ -p newest-cni-895748 --alsologtostderr -v=3                                                                                                                                                                                                         │ newest-cni-895748            │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ addons  │ enable dashboard -p newest-cni-895748 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ newest-cni-895748            │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ start   │ -p newest-cni-895748 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0 │ newest-cni-895748            │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-099552 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-099552       │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ start   │ -p old-k8s-version-099552 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-099552       │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │                     │
	│ image   │ newest-cni-895748 image list --format=json                                                                                                                                                                                                          │ newest-cni-895748            │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ pause   │ -p newest-cni-895748 --alsologtostderr -v=1                                                                                                                                                                                                         │ newest-cni-895748            │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ unpause │ -p newest-cni-895748 --alsologtostderr -v=1                                                                                                                                                                                                         │ newest-cni-895748            │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ delete  │ -p newest-cni-895748                                                                                                                                                                                                                                │ newest-cni-895748            │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ delete  │ -p newest-cni-895748                                                                                                                                                                                                                                │ newest-cni-895748            │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ delete  │ -p disable-driver-mounts-908870                                                                                                                                                                                                                     │ disable-driver-mounts-908870 │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ start   │ -p embed-certs-656365 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0                                                                                        │ embed-certs-656365           │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-305343 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ no-preload-305343            │ jenkins │ v1.37.0 │ 17 Sep 25 00:50 UTC │ 17 Sep 25 00:50 UTC │
	│ stop    │ -p no-preload-305343 --alsologtostderr -v=3                                                                                                                                                                                                         │ no-preload-305343            │ jenkins │ v1.37.0 │ 17 Sep 25 00:50 UTC │ 17 Sep 25 00:50 UTC │
	│ image   │ default-k8s-diff-port-011954 image list --format=json                                                                                                                                                                                               │ default-k8s-diff-port-011954 │ jenkins │ v1.37.0 │ 17 Sep 25 00:50 UTC │ 17 Sep 25 00:50 UTC │
	│ pause   │ -p default-k8s-diff-port-011954 --alsologtostderr -v=1                                                                                                                                                                                              │ default-k8s-diff-port-011954 │ jenkins │ v1.37.0 │ 17 Sep 25 00:50 UTC │ 17 Sep 25 00:50 UTC │
	│ unpause │ -p default-k8s-diff-port-011954 --alsologtostderr -v=1                                                                                                                                                                                              │ default-k8s-diff-port-011954 │ jenkins │ v1.37.0 │ 17 Sep 25 00:50 UTC │ 17 Sep 25 00:50 UTC │
	│ delete  │ -p default-k8s-diff-port-011954                                                                                                                                                                                                                     │ default-k8s-diff-port-011954 │ jenkins │ v1.37.0 │ 17 Sep 25 00:50 UTC │ 17 Sep 25 00:50 UTC │
	│ addons  │ enable dashboard -p no-preload-305343 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ no-preload-305343            │ jenkins │ v1.37.0 │ 17 Sep 25 00:50 UTC │ 17 Sep 25 00:50 UTC │
	│ start   │ -p no-preload-305343 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0                                                                                       │ no-preload-305343            │ jenkins │ v1.37.0 │ 17 Sep 25 00:50 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/17 00:50:26
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 00:50:26.600102 1105542 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:50:26.600372 1105542 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:50:26.600383 1105542 out.go:374] Setting ErrFile to fd 2...
	I0917 00:50:26.600387 1105542 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:50:26.600688 1105542 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-749120/.minikube/bin
	I0917 00:50:26.601270 1105542 out.go:368] Setting JSON to false
	I0917 00:50:26.602722 1105542 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":12769,"bootTime":1758057458,"procs":266,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 00:50:26.602857 1105542 start.go:140] virtualization: kvm guest
	I0917 00:50:26.604653 1105542 out.go:179] * [no-preload-305343] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0917 00:50:26.605852 1105542 out.go:179]   - MINIKUBE_LOCATION=21550
	I0917 00:50:26.605872 1105542 notify.go:220] Checking for updates...
	I0917 00:50:26.607972 1105542 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 00:50:26.609198 1105542 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-749120/kubeconfig
	I0917 00:50:26.610268 1105542 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-749120/.minikube
	I0917 00:50:26.611322 1105542 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 00:50:26.612361 1105542 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 00:50:26.613681 1105542 config.go:182] Loaded profile config "no-preload-305343": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:50:26.614196 1105542 driver.go:421] Setting default libvirt URI to qemu:///system
	I0917 00:50:26.637278 1105542 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0917 00:50:26.637376 1105542 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:50:26.690682 1105542 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:65 SystemTime:2025-09-17 00:50:26.681595287 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:50:26.690839 1105542 docker.go:318] overlay module found
	I0917 00:50:26.692660 1105542 out.go:179] * Using the docker driver based on existing profile
	I0917 00:50:26.693532 1105542 start.go:304] selected driver: docker
	I0917 00:50:26.693548 1105542 start.go:918] validating driver "docker" against &{Name:no-preload-305343 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-305343 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:50:26.693646 1105542 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 00:50:26.694360 1105542 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:50:26.747230 1105542 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:65 SystemTime:2025-09-17 00:50:26.73681015 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:50:26.747565 1105542 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:50:26.747603 1105542 cni.go:84] Creating CNI manager for ""
	I0917 00:50:26.747674 1105542 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0917 00:50:26.747730 1105542 start.go:348] cluster config:
	{Name:no-preload-305343 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-305343 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:50:26.749227 1105542 out.go:179] * Starting "no-preload-305343" primary control-plane node in "no-preload-305343" cluster
	I0917 00:50:26.750402 1105542 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0917 00:50:26.751455 1105542 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:50:26.752350 1105542 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0917 00:50:26.752446 1105542 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:50:26.752497 1105542 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/no-preload-305343/config.json ...
	I0917 00:50:26.752648 1105542 cache.go:107] acquiring lock: {Name:mk4909779fb0f5743ddfc059d2d0162861e84f07 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:50:26.752675 1105542 cache.go:107] acquiring lock: {Name:mk6df29775b0b58d1ac8dea5ffe905dd7aa0e789 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:50:26.752655 1105542 cache.go:107] acquiring lock: {Name:mkddf7ba64ef1815649c8a0d31e1ab341ed655cd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:50:26.752697 1105542 cache.go:107] acquiring lock: {Name:mk97f69e3cbe6c234e4de1197be2229ef06ba13f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:50:26.752737 1105542 cache.go:107] acquiring lock: {Name:mkaa0f91c4e98db3393f92864e13e9189082e595 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:50:26.752794 1105542 cache.go:115] /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.0 exists
	I0917 00:50:26.752805 1105542 cache.go:115] /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0917 00:50:26.752810 1105542 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.0" -> "/home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.0" took 171.127µs
	I0917 00:50:26.752837 1105542 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.0 -> /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.0 succeeded
	I0917 00:50:26.752817 1105542 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 180.429µs
	I0917 00:50:26.752848 1105542 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0917 00:50:26.752844 1105542 cache.go:115] /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.0 exists
	I0917 00:50:26.752830 1105542 cache.go:107] acquiring lock: {Name:mkf4cb04c1071ecafdc32f1a85d4e090a7c4807c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:50:26.752860 1105542 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.0" -> "/home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.0" took 124.82µs
	I0917 00:50:26.752867 1105542 cache.go:115] /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I0917 00:50:26.752870 1105542 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.0 -> /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.0 succeeded
	I0917 00:50:26.752850 1105542 cache.go:115] /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.0 exists
	I0917 00:50:26.752876 1105542 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 206.283µs
	I0917 00:50:26.752885 1105542 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I0917 00:50:26.752884 1105542 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.0" -> "/home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.0" took 209.597µs
	I0917 00:50:26.752892 1105542 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.0 -> /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.0 succeeded
	I0917 00:50:26.752943 1105542 cache.go:115] /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I0917 00:50:26.752908 1105542 cache.go:107] acquiring lock: {Name:mkc74a1cd2fbc63086196dff6872225ceed330b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:50:26.752953 1105542 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 170.837µs
	I0917 00:50:26.752932 1105542 cache.go:107] acquiring lock: {Name:mka95aa97be9e772922157c335bf881cd020f83a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:50:26.752963 1105542 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I0917 00:50:26.753088 1105542 cache.go:115] /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I0917 00:50:26.753104 1105542 cache.go:115] /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.0 exists
	I0917 00:50:26.753103 1105542 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 260.35µs
	I0917 00:50:26.753116 1105542 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I0917 00:50:26.753114 1105542 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.0" -> "/home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.0" took 233.147µs
	I0917 00:50:26.753124 1105542 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.0 -> /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.0 succeeded
	I0917 00:50:26.753139 1105542 cache.go:87] Successfully saved all images to host disk.
	I0917 00:50:26.772809 1105542 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:50:26.772828 1105542 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:50:26.772844 1105542 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:50:26.772868 1105542 start.go:360] acquireMachinesLock for no-preload-305343: {Name:mk301cc5652bfe73a264aaf61a48b9167df412f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:50:26.772918 1105542 start.go:364] duration metric: took 33.839µs to acquireMachinesLock for "no-preload-305343"
	I0917 00:50:26.772939 1105542 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:50:26.772949 1105542 fix.go:54] fixHost starting: 
	I0917 00:50:26.773161 1105542 cli_runner.go:164] Run: docker container inspect no-preload-305343 --format={{.State.Status}}
	I0917 00:50:26.790526 1105542 fix.go:112] recreateIfNeeded on no-preload-305343: state=Stopped err=<nil>
	W0917 00:50:26.790551 1105542 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:50:23.500084 1099270 system_pods.go:86] 8 kube-system pods found
	I0917 00:50:23.500124 1099270 system_pods.go:89] "coredns-66bc5c9577-l6kf4" [3471ccde-7a2f-40db-9b89-0b0b1d99d708] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:50:23.500134 1099270 system_pods.go:89] "etcd-embed-certs-656365" [7019686f-c4e9-4229-96b9-b0b736f1ff1f] Running
	I0917 00:50:23.500142 1099270 system_pods.go:89] "kindnet-82pzc" [98599a74-dfca-4aaa-8c5f-f5b5fa514aea] Running
	I0917 00:50:23.500148 1099270 system_pods.go:89] "kube-apiserver-embed-certs-656365" [4474c0bb-d251-4a3f-9617-05519f6f36e1] Running
	I0917 00:50:23.500156 1099270 system_pods.go:89] "kube-controller-manager-embed-certs-656365" [9d8ea73e-c495-4fc0-8e50-d21f3bbd7bf7] Running
	I0917 00:50:23.500163 1099270 system_pods.go:89] "kube-proxy-h2lgd" [ec2eebd5-b4b3-41cf-af3b-efda8464fe22] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 00:50:23.500173 1099270 system_pods.go:89] "kube-scheduler-embed-certs-656365" [74679934-eb97-474d-99a2-28c356aa74b4] Running
	I0917 00:50:23.500181 1099270 system_pods.go:89] "storage-provisioner" [124939ce-5cb9-41fa-b555-2b807af05792] Running
	I0917 00:50:23.500205 1099270 retry.go:31] will retry after 4.483504149s: missing components: kube-dns
	I0917 00:50:28.271397 1094183 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:50:28.306326 1094183 out.go:203] 
	W0917 00:50:28.307357 1094183 out.go:285] X Exiting due to RUNTIME_ENABLE: Failed to start container runtime: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:50:28.301888     533 remote_runtime.go:189] "Version from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	time="2025-09-17T00:50:28Z" level=fatal msg="getting the runtime version: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	
	W0917 00:50:28.307374 1094183 out.go:285] * 
	W0917 00:50:28.309097 1094183 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 00:50:28.310228 1094183 out.go:203] 
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:50:30.513221     853 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	time="2025-09-17T00:50:30Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> containerd <==
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.817851916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.817861430Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.817871180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.817881996Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.817897602Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.817915140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.817935663Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818007576Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818024724Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818033043Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818056520Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818070483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818081010Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818090259Z" level=info msg="NRI interface is disabled by configuration."
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818098205Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818326501Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRu
ntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:true IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath:/etc/containerd/certs.d Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress: StreamServerPort:10010 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.9 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissi
ngHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:true EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818370040Z" level=info msg="Connect containerd service"
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818405590Z" level=info msg="using legacy CRI server"
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818449224Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818553850Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818992386Z" level=warning msg="failed to load plugin io.containerd.grpc.v1.cri" error="failed to create CRI service: failed to create cni conf monitor for default: failed to create fsnotify watcher: too many open files"
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.819183751Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.819229690Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.819274263Z" level=info msg="containerd successfully booted in 0.026038s"
	Sep 17 00:49:32 old-k8s-version-099552 systemd[1]: Started containerd container runtime.
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.397971] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev bridge
	[  +1.103886] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev bridge
	[  +0.397468] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev bridge
	[  +0.988018] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev bridge
	[  +0.115808] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev bridge
	[  +0.396948] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev bridge
	[  +1.104485] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev bridge
	[  +0.396293] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev bridge
	[  +1.105124] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev bridge
	[  +0.396148] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev bridge
	[  +1.500649] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev bridge
	[  +0.569526] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 5f ef 1c 19 8f 08 06
	[ +14.523051] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 56 52 1e b8 75 d2 08 06
	[  +0.000432] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff ba 5e 90 6a a2 f3 08 06
	[  +7.560975] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 2a d4 fe 64 89 0f 08 06
	[  +0.000660] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d6 50 38 7f fb 72 08 06
	[Sep17 00:48] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ea ab 9c df 51 6c 08 06
	[  +0.000561] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 92 5f ef 1c 19 8f 08 06
	
	
	==> kernel <==
	 00:50:30 up  3:32,  0 users,  load average: 2.92, 2.90, 2.17
	Linux old-k8s-version-099552 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kubelet <==
	-- No entries --
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0917 00:50:30.199982 1106832 logs.go:279] Failed to list containers for "kube-apiserver": crictl list: sudo crictl ps -a --quiet --name=kube-apiserver: Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:50:30.197305     743 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	time="2025-09-17T00:50:30Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0917 00:50:30.236745 1106832 logs.go:279] Failed to list containers for "etcd": crictl list: sudo crictl ps -a --quiet --name=etcd: Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:50:30.233280     755 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	time="2025-09-17T00:50:30Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0917 00:50:30.270278 1106832 logs.go:279] Failed to list containers for "coredns": crictl list: sudo crictl ps -a --quiet --name=coredns: Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:50:30.267735     767 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	time="2025-09-17T00:50:30Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0917 00:50:30.304381 1106832 logs.go:279] Failed to list containers for "kube-scheduler": crictl list: sudo crictl ps -a --quiet --name=kube-scheduler: Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:50:30.302016     779 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	time="2025-09-17T00:50:30Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0917 00:50:30.337739 1106832 logs.go:279] Failed to list containers for "kube-proxy": crictl list: sudo crictl ps -a --quiet --name=kube-proxy: Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:50:30.335071     791 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	time="2025-09-17T00:50:30Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0917 00:50:30.370686 1106832 logs.go:279] Failed to list containers for "kube-controller-manager": crictl list: sudo crictl ps -a --quiet --name=kube-controller-manager: Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:50:30.368315     803 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	time="2025-09-17T00:50:30Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0917 00:50:30.406902 1106832 logs.go:279] Failed to list containers for "kindnet": crictl list: sudo crictl ps -a --quiet --name=kindnet: Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:50:30.403904     815 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	time="2025-09-17T00:50:30Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0917 00:50:30.442055 1106832 logs.go:279] Failed to list containers for "storage-provisioner": crictl list: sudo crictl ps -a --quiet --name=storage-provisioner: Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:50:30.439532     826 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	time="2025-09-17T00:50:30Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0917 00:50:30.476498 1106832 logs.go:279] Failed to list containers for "kubernetes-dashboard": crictl list: sudo crictl ps -a --quiet --name=kubernetes-dashboard: Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:50:30.473774     838 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	time="2025-09-17T00:50:30Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"

                                                
                                                
** /stderr **
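Every listing attempt above fails the same way: crictl asks the socket for runtime.v1alpha2.RuntimeService and containerd answers Unimplemented, so no per-component container logs could be collected. This pattern usually indicates a version mismatch between the crictl binary on the node and the containerd behind the socket: one side still expects the legacy v1alpha2 CRI API that the other no longer serves. A minimal way to see what the socket actually serves (a sketch only; the default containerd socket path is assumed and not confirmed by this report):
	sudo ctr version                                                                # containerd client/server versions, bypasses CRI
	sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version   # RuntimeApiVersion should report v1 on current containerd
If the second command fails with the same v1alpha2 error while containerd itself is current, the crictl shipped onto the node is too old for this containerd.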
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-099552 -n old-k8s-version-099552
E0917 00:50:30.962894  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/auto-284174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-099552 -n old-k8s-version-099552: exit status 6 (293.79015ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0917 00:50:30.960629 1107136 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-099552" does not appear in /home/jenkins/minikube-integration/21550-749120/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "old-k8s-version-099552" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (1.34s)
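The status error above ("old-k8s-version-099552" does not appear in the kubeconfig) means the failed restart has not written the profile's context back, so minikube status and every kubectl --context old-k8s-version-099552 call in the following tests fail on a missing context rather than on the cluster itself. As the status output itself suggests, the usual repair is update-context, sketched below (it can only succeed once the apiserver is reachable again, which is not the case here):
	kubectl config get-contexts                           # the profile's context is absent
	minikube -p old-k8s-version-099552 update-context     # rewrites the kubeconfig entry for the profile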

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (1.44s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-099552" does not exist
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-099552 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context old-k8s-version-099552 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (46.033447ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-099552" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-099552 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
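The assertion expects the dashboard-metrics-scraper deployment to carry the substituted image registry.k8s.io/echoserver:1.4 (the --images=MetricsScraper=... override visible in the Audit log further down), but with the context missing no deployment info could be read at all. When the context does exist, the same check amounts to reading the container images off the deployment, for example (a sketch using standard kubectl jsonpath, not the test's own helper):
	kubectl --context old-k8s-version-099552 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'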
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-099552
helpers_test.go:243: (dbg) docker inspect old-k8s-version-099552:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "dc5a2344012058adc433d4d94bb8f0cfb2d2d9a3fcc4c579e400734d708bf606",
	        "Created": "2025-09-17T00:47:14.452618877Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1094443,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-17T00:49:26.538560853Z",
	            "FinishedAt": "2025-09-17T00:49:25.518757767Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/dc5a2344012058adc433d4d94bb8f0cfb2d2d9a3fcc4c579e400734d708bf606/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dc5a2344012058adc433d4d94bb8f0cfb2d2d9a3fcc4c579e400734d708bf606/hostname",
	        "HostsPath": "/var/lib/docker/containers/dc5a2344012058adc433d4d94bb8f0cfb2d2d9a3fcc4c579e400734d708bf606/hosts",
	        "LogPath": "/var/lib/docker/containers/dc5a2344012058adc433d4d94bb8f0cfb2d2d9a3fcc4c579e400734d708bf606/dc5a2344012058adc433d4d94bb8f0cfb2d2d9a3fcc4c579e400734d708bf606-json.log",
	        "Name": "/old-k8s-version-099552",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-099552:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-099552",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dc5a2344012058adc433d4d94bb8f0cfb2d2d9a3fcc4c579e400734d708bf606",
	                "LowerDir": "/var/lib/docker/overlay2/5a917f22cc5c8d72ade6fe744606e8b48681ead29c7cc52e99c757cb3866734e-init/diff:/var/lib/docker/overlay2/949a3fbecd0c2c005aa419b7ddc214e7bf4333225d7b227e8b0d0ea188b945ec/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5a917f22cc5c8d72ade6fe744606e8b48681ead29c7cc52e99c757cb3866734e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5a917f22cc5c8d72ade6fe744606e8b48681ead29c7cc52e99c757cb3866734e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5a917f22cc5c8d72ade6fe744606e8b48681ead29c7cc52e99c757cb3866734e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-099552",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-099552/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-099552",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-099552",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-099552",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2ddde64c24f24e3766a57841a672431afff6e67b8b55455f7a18ce1a12566fcb",
	            "SandboxKey": "/var/run/docker/netns/2ddde64c24f2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33884"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33885"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33888"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33886"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33887"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-099552": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ea:0f:62:a1:13:1a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c084050a20a9e46a211e9f023f9558fec9400a691d73a4266e29ff60000fdc12",
	                    "EndpointID": "d69770401db68668497e8a9ddef6c93a77673f95c78cb11a18c567c6244c9d3d",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-099552",
	                        "dc5a23440120"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
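The inspect output shows the kic container itself is up: State.Running is true, the profile network old-k8s-version-099552 assigned 192.168.85.2, and 8443/tcp (the apiserver port) is published on 127.0.0.1:33887. The failure therefore appears to be inside the container (the kubelet log earlier in this report is empty and crictl cannot reach a working CRI), not at the Docker level. The same fields can be pulled without the full dump, for example (plain Docker CLI, shown here only as a sketch):
	docker port old-k8s-version-099552 8443/tcp
	docker inspect -f '{{ .State.Status }} {{ (index .NetworkSettings.Networks "old-k8s-version-099552").IPAddress }}' old-k8s-version-099552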
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-099552 -n old-k8s-version-099552
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-099552 -n old-k8s-version-099552: exit status 6 (287.532101ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0917 00:50:31.312684 1107315 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-099552" does not appear in /home/jenkins/minikube-integration/21550-749120/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-099552 logs -n 25
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────
─────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────
─────────────┤
	│ addons  │ enable metrics-server -p old-k8s-version-099552 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-099552       │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ stop    │ -p old-k8s-version-099552 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-099552       │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ addons  │ enable metrics-server -p newest-cni-895748 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ newest-cni-895748            │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-011954 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ default-k8s-diff-port-011954 │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ start   │ -p default-k8s-diff-port-011954 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0                                                                      │ default-k8s-diff-port-011954 │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:50 UTC │
	│ stop    │ -p newest-cni-895748 --alsologtostderr -v=3                                                                                                                                                                                                         │ newest-cni-895748            │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ addons  │ enable dashboard -p newest-cni-895748 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ newest-cni-895748            │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ start   │ -p newest-cni-895748 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0 │ newest-cni-895748            │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-099552 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-099552       │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ start   │ -p old-k8s-version-099552 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-099552       │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │                     │
	│ image   │ newest-cni-895748 image list --format=json                                                                                                                                                                                                          │ newest-cni-895748            │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ pause   │ -p newest-cni-895748 --alsologtostderr -v=1                                                                                                                                                                                                         │ newest-cni-895748            │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ unpause │ -p newest-cni-895748 --alsologtostderr -v=1                                                                                                                                                                                                         │ newest-cni-895748            │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ delete  │ -p newest-cni-895748                                                                                                                                                                                                                                │ newest-cni-895748            │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ delete  │ -p newest-cni-895748                                                                                                                                                                                                                                │ newest-cni-895748            │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ delete  │ -p disable-driver-mounts-908870                                                                                                                                                                                                                     │ disable-driver-mounts-908870 │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ start   │ -p embed-certs-656365 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0                                                                                        │ embed-certs-656365           │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-305343 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ no-preload-305343            │ jenkins │ v1.37.0 │ 17 Sep 25 00:50 UTC │ 17 Sep 25 00:50 UTC │
	│ stop    │ -p no-preload-305343 --alsologtostderr -v=3                                                                                                                                                                                                         │ no-preload-305343            │ jenkins │ v1.37.0 │ 17 Sep 25 00:50 UTC │ 17 Sep 25 00:50 UTC │
	│ image   │ default-k8s-diff-port-011954 image list --format=json                                                                                                                                                                                               │ default-k8s-diff-port-011954 │ jenkins │ v1.37.0 │ 17 Sep 25 00:50 UTC │ 17 Sep 25 00:50 UTC │
	│ pause   │ -p default-k8s-diff-port-011954 --alsologtostderr -v=1                                                                                                                                                                                              │ default-k8s-diff-port-011954 │ jenkins │ v1.37.0 │ 17 Sep 25 00:50 UTC │ 17 Sep 25 00:50 UTC │
	│ unpause │ -p default-k8s-diff-port-011954 --alsologtostderr -v=1                                                                                                                                                                                              │ default-k8s-diff-port-011954 │ jenkins │ v1.37.0 │ 17 Sep 25 00:50 UTC │ 17 Sep 25 00:50 UTC │
	│ delete  │ -p default-k8s-diff-port-011954                                                                                                                                                                                                                     │ default-k8s-diff-port-011954 │ jenkins │ v1.37.0 │ 17 Sep 25 00:50 UTC │ 17 Sep 25 00:50 UTC │
	│ addons  │ enable dashboard -p no-preload-305343 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ no-preload-305343            │ jenkins │ v1.37.0 │ 17 Sep 25 00:50 UTC │ 17 Sep 25 00:50 UTC │
	│ start   │ -p no-preload-305343 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0                                                                                       │ no-preload-305343            │ jenkins │ v1.37.0 │ 17 Sep 25 00:50 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────
─────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/17 00:50:26
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 00:50:26.600102 1105542 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:50:26.600372 1105542 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:50:26.600383 1105542 out.go:374] Setting ErrFile to fd 2...
	I0917 00:50:26.600387 1105542 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:50:26.600688 1105542 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-749120/.minikube/bin
	I0917 00:50:26.601270 1105542 out.go:368] Setting JSON to false
	I0917 00:50:26.602722 1105542 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":12769,"bootTime":1758057458,"procs":266,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 00:50:26.602857 1105542 start.go:140] virtualization: kvm guest
	I0917 00:50:26.604653 1105542 out.go:179] * [no-preload-305343] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0917 00:50:26.605852 1105542 out.go:179]   - MINIKUBE_LOCATION=21550
	I0917 00:50:26.605872 1105542 notify.go:220] Checking for updates...
	I0917 00:50:26.607972 1105542 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 00:50:26.609198 1105542 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-749120/kubeconfig
	I0917 00:50:26.610268 1105542 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-749120/.minikube
	I0917 00:50:26.611322 1105542 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 00:50:26.612361 1105542 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 00:50:26.613681 1105542 config.go:182] Loaded profile config "no-preload-305343": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:50:26.614196 1105542 driver.go:421] Setting default libvirt URI to qemu:///system
	I0917 00:50:26.637278 1105542 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0917 00:50:26.637376 1105542 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:50:26.690682 1105542 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:65 SystemTime:2025-09-17 00:50:26.681595287 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:50:26.690839 1105542 docker.go:318] overlay module found
	I0917 00:50:26.692660 1105542 out.go:179] * Using the docker driver based on existing profile
	I0917 00:50:26.693532 1105542 start.go:304] selected driver: docker
	I0917 00:50:26.693548 1105542 start.go:918] validating driver "docker" against &{Name:no-preload-305343 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-305343 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:50:26.693646 1105542 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 00:50:26.694360 1105542 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:50:26.747230 1105542 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:65 SystemTime:2025-09-17 00:50:26.73681015 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:50:26.747565 1105542 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:50:26.747603 1105542 cni.go:84] Creating CNI manager for ""
	I0917 00:50:26.747674 1105542 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0917 00:50:26.747730 1105542 start.go:348] cluster config:
	{Name:no-preload-305343 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-305343 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:50:26.749227 1105542 out.go:179] * Starting "no-preload-305343" primary control-plane node in "no-preload-305343" cluster
	I0917 00:50:26.750402 1105542 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0917 00:50:26.751455 1105542 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:50:26.752350 1105542 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0917 00:50:26.752446 1105542 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:50:26.752497 1105542 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/no-preload-305343/config.json ...
	I0917 00:50:26.752648 1105542 cache.go:107] acquiring lock: {Name:mk4909779fb0f5743ddfc059d2d0162861e84f07 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:50:26.752675 1105542 cache.go:107] acquiring lock: {Name:mk6df29775b0b58d1ac8dea5ffe905dd7aa0e789 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:50:26.752655 1105542 cache.go:107] acquiring lock: {Name:mkddf7ba64ef1815649c8a0d31e1ab341ed655cd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:50:26.752697 1105542 cache.go:107] acquiring lock: {Name:mk97f69e3cbe6c234e4de1197be2229ef06ba13f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:50:26.752737 1105542 cache.go:107] acquiring lock: {Name:mkaa0f91c4e98db3393f92864e13e9189082e595 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:50:26.752794 1105542 cache.go:115] /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.0 exists
	I0917 00:50:26.752805 1105542 cache.go:115] /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0917 00:50:26.752810 1105542 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.0" -> "/home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.0" took 171.127µs
	I0917 00:50:26.752837 1105542 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.0 -> /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.0 succeeded
	I0917 00:50:26.752817 1105542 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 180.429µs
	I0917 00:50:26.752848 1105542 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0917 00:50:26.752844 1105542 cache.go:115] /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.0 exists
	I0917 00:50:26.752830 1105542 cache.go:107] acquiring lock: {Name:mkf4cb04c1071ecafdc32f1a85d4e090a7c4807c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:50:26.752860 1105542 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.0" -> "/home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.0" took 124.82µs
	I0917 00:50:26.752867 1105542 cache.go:115] /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I0917 00:50:26.752870 1105542 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.0 -> /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.0 succeeded
	I0917 00:50:26.752850 1105542 cache.go:115] /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.0 exists
	I0917 00:50:26.752876 1105542 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 206.283µs
	I0917 00:50:26.752885 1105542 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I0917 00:50:26.752884 1105542 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.0" -> "/home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.0" took 209.597µs
	I0917 00:50:26.752892 1105542 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.0 -> /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.0 succeeded
	I0917 00:50:26.752943 1105542 cache.go:115] /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I0917 00:50:26.752908 1105542 cache.go:107] acquiring lock: {Name:mkc74a1cd2fbc63086196dff6872225ceed330b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:50:26.752953 1105542 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 170.837µs
	I0917 00:50:26.752932 1105542 cache.go:107] acquiring lock: {Name:mka95aa97be9e772922157c335bf881cd020f83a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:50:26.752963 1105542 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I0917 00:50:26.753088 1105542 cache.go:115] /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I0917 00:50:26.753104 1105542 cache.go:115] /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.0 exists
	I0917 00:50:26.753103 1105542 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 260.35µs
	I0917 00:50:26.753116 1105542 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I0917 00:50:26.753114 1105542 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.0" -> "/home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.0" took 233.147µs
	I0917 00:50:26.753124 1105542 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.0 -> /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.0 succeeded
	I0917 00:50:26.753139 1105542 cache.go:87] Successfully saved all images to host disk.
	I0917 00:50:26.772809 1105542 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:50:26.772828 1105542 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:50:26.772844 1105542 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:50:26.772868 1105542 start.go:360] acquireMachinesLock for no-preload-305343: {Name:mk301cc5652bfe73a264aaf61a48b9167df412f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:50:26.772918 1105542 start.go:364] duration metric: took 33.839µs to acquireMachinesLock for "no-preload-305343"
	I0917 00:50:26.772939 1105542 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:50:26.772949 1105542 fix.go:54] fixHost starting: 
	I0917 00:50:26.773161 1105542 cli_runner.go:164] Run: docker container inspect no-preload-305343 --format={{.State.Status}}
	I0917 00:50:26.790526 1105542 fix.go:112] recreateIfNeeded on no-preload-305343: state=Stopped err=<nil>
	W0917 00:50:26.790551 1105542 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:50:23.500084 1099270 system_pods.go:86] 8 kube-system pods found
	I0917 00:50:23.500124 1099270 system_pods.go:89] "coredns-66bc5c9577-l6kf4" [3471ccde-7a2f-40db-9b89-0b0b1d99d708] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:50:23.500134 1099270 system_pods.go:89] "etcd-embed-certs-656365" [7019686f-c4e9-4229-96b9-b0b736f1ff1f] Running
	I0917 00:50:23.500142 1099270 system_pods.go:89] "kindnet-82pzc" [98599a74-dfca-4aaa-8c5f-f5b5fa514aea] Running
	I0917 00:50:23.500148 1099270 system_pods.go:89] "kube-apiserver-embed-certs-656365" [4474c0bb-d251-4a3f-9617-05519f6f36e1] Running
	I0917 00:50:23.500156 1099270 system_pods.go:89] "kube-controller-manager-embed-certs-656365" [9d8ea73e-c495-4fc0-8e50-d21f3bbd7bf7] Running
	I0917 00:50:23.500163 1099270 system_pods.go:89] "kube-proxy-h2lgd" [ec2eebd5-b4b3-41cf-af3b-efda8464fe22] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 00:50:23.500173 1099270 system_pods.go:89] "kube-scheduler-embed-certs-656365" [74679934-eb97-474d-99a2-28c356aa74b4] Running
	I0917 00:50:23.500181 1099270 system_pods.go:89] "storage-provisioner" [124939ce-5cb9-41fa-b555-2b807af05792] Running
	I0917 00:50:23.500205 1099270 retry.go:31] will retry after 4.483504149s: missing components: kube-dns
	I0917 00:50:28.271397 1094183 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:50:28.306326 1094183 out.go:203] 
	W0917 00:50:28.307357 1094183 out.go:285] X Exiting due to RUNTIME_ENABLE: Failed to start container runtime: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:50:28.301888     533 remote_runtime.go:189] "Version from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	time="2025-09-17T00:50:28Z" level=fatal msg="getting the runtime version: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	
	W0917 00:50:28.307374 1094183 out.go:285] * 
	W0917 00:50:28.309097 1094183 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 00:50:28.310228 1094183 out.go:203] 
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:50:31.951901    1033 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	time="2025-09-17T00:50:31Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> containerd <==
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.817851916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.817861430Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.817871180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.817881996Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.817897602Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.817915140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.817935663Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818007576Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818024724Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818033043Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818056520Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818070483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818081010Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818090259Z" level=info msg="NRI interface is disabled by configuration."
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818098205Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818326501Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRu
ntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:true IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath:/etc/containerd/certs.d Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress: StreamServerPort:10010 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.9 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissi
ngHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:true EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818370040Z" level=info msg="Connect containerd service"
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818405590Z" level=info msg="using legacy CRI server"
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818449224Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818553850Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818992386Z" level=warning msg="failed to load plugin io.containerd.grpc.v1.cri" error="failed to create CRI service: failed to create cni conf monitor for default: failed to create fsnotify watcher: too many open files"
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.819183751Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.819229690Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.819274263Z" level=info msg="containerd successfully booted in 0.026038s"
	Sep 17 00:49:32 old-k8s-version-099552 systemd[1]: Started containerd container runtime.
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.397971] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev bridge
	[  +1.103886] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev bridge
	[  +0.397468] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev bridge
	[  +0.988018] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev bridge
	[  +0.115808] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev bridge
	[  +0.396948] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev bridge
	[  +1.104485] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev bridge
	[  +0.396293] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev bridge
	[  +1.105124] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev bridge
	[  +0.396148] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev bridge
	[  +1.500649] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev bridge
	[  +0.569526] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 5f ef 1c 19 8f 08 06
	[ +14.523051] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 56 52 1e b8 75 d2 08 06
	[  +0.000432] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff ba 5e 90 6a a2 f3 08 06
	[  +7.560975] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 2a d4 fe 64 89 0f 08 06
	[  +0.000660] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d6 50 38 7f fb 72 08 06
	[Sep17 00:48] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ea ab 9c df 51 6c 08 06
	[  +0.000561] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 92 5f ef 1c 19 8f 08 06
	
	
	==> kernel <==
	 00:50:32 up  3:32,  0 users,  load average: 2.84, 2.89, 2.17
	Linux old-k8s-version-099552 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kubelet <==
	-- No entries --
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0917 00:50:31.619257 1107492 logs.go:279] Failed to list containers for "kube-apiserver": crictl list: sudo crictl ps -a --quiet --name=kube-apiserver: Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:50:31.616371     924 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	time="2025-09-17T00:50:31Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0917 00:50:31.659927 1107492 logs.go:279] Failed to list containers for "etcd": crictl list: sudo crictl ps -a --quiet --name=etcd: Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:50:31.657216     936 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	time="2025-09-17T00:50:31Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0917 00:50:31.695740 1107492 logs.go:279] Failed to list containers for "coredns": crictl list: sudo crictl ps -a --quiet --name=coredns: Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:50:31.693137     948 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	time="2025-09-17T00:50:31Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0917 00:50:31.730978 1107492 logs.go:279] Failed to list containers for "kube-scheduler": crictl list: sudo crictl ps -a --quiet --name=kube-scheduler: Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:50:31.728273     960 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	time="2025-09-17T00:50:31Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0917 00:50:31.768041 1107492 logs.go:279] Failed to list containers for "kube-proxy": crictl list: sudo crictl ps -a --quiet --name=kube-proxy: Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:50:31.765483     971 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	time="2025-09-17T00:50:31Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0917 00:50:31.804480 1107492 logs.go:279] Failed to list containers for "kube-controller-manager": crictl list: sudo crictl ps -a --quiet --name=kube-controller-manager: Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:50:31.800873     982 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	time="2025-09-17T00:50:31Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0917 00:50:31.842616 1107492 logs.go:279] Failed to list containers for "kindnet": crictl list: sudo crictl ps -a --quiet --name=kindnet: Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:50:31.839190     994 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	time="2025-09-17T00:50:31Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0917 00:50:31.881075 1107492 logs.go:279] Failed to list containers for "storage-provisioner": crictl list: sudo crictl ps -a --quiet --name=storage-provisioner: Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:50:31.878166    1006 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	time="2025-09-17T00:50:31Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0917 00:50:31.915283 1107492 logs.go:279] Failed to list containers for "kubernetes-dashboard": crictl list: sudo crictl ps -a --quiet --name=kubernetes-dashboard: Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:50:31.912556    1018 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	time="2025-09-17T00:50:31Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"

                                                
                                                
** /stderr **
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-099552 -n old-k8s-version-099552
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-099552 -n old-k8s-version-099552: exit status 6 (295.45702ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0917 00:50:32.403234 1107901 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-099552" does not appear in /home/jenkins/minikube-integration/21550-749120/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "old-k8s-version-099552" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (1.44s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (1.93s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-099552 image list --format=json
start_stop_delete_test.go:302: v1.28.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.0",
- 	"registry.k8s.io/kube-controller-manager:v1.28.0",
- 	"registry.k8s.io/kube-proxy:v1.28.0",
- 	"registry.k8s.io/kube-scheduler:v1.28.0",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-099552
helpers_test.go:243: (dbg) docker inspect old-k8s-version-099552:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "dc5a2344012058adc433d4d94bb8f0cfb2d2d9a3fcc4c579e400734d708bf606",
	        "Created": "2025-09-17T00:47:14.452618877Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1094443,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-17T00:49:26.538560853Z",
	            "FinishedAt": "2025-09-17T00:49:25.518757767Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/dc5a2344012058adc433d4d94bb8f0cfb2d2d9a3fcc4c579e400734d708bf606/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dc5a2344012058adc433d4d94bb8f0cfb2d2d9a3fcc4c579e400734d708bf606/hostname",
	        "HostsPath": "/var/lib/docker/containers/dc5a2344012058adc433d4d94bb8f0cfb2d2d9a3fcc4c579e400734d708bf606/hosts",
	        "LogPath": "/var/lib/docker/containers/dc5a2344012058adc433d4d94bb8f0cfb2d2d9a3fcc4c579e400734d708bf606/dc5a2344012058adc433d4d94bb8f0cfb2d2d9a3fcc4c579e400734d708bf606-json.log",
	        "Name": "/old-k8s-version-099552",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-099552:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-099552",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dc5a2344012058adc433d4d94bb8f0cfb2d2d9a3fcc4c579e400734d708bf606",
	                "LowerDir": "/var/lib/docker/overlay2/5a917f22cc5c8d72ade6fe744606e8b48681ead29c7cc52e99c757cb3866734e-init/diff:/var/lib/docker/overlay2/949a3fbecd0c2c005aa419b7ddc214e7bf4333225d7b227e8b0d0ea188b945ec/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5a917f22cc5c8d72ade6fe744606e8b48681ead29c7cc52e99c757cb3866734e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5a917f22cc5c8d72ade6fe744606e8b48681ead29c7cc52e99c757cb3866734e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5a917f22cc5c8d72ade6fe744606e8b48681ead29c7cc52e99c757cb3866734e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-099552",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-099552/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-099552",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-099552",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-099552",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2ddde64c24f24e3766a57841a672431afff6e67b8b55455f7a18ce1a12566fcb",
	            "SandboxKey": "/var/run/docker/netns/2ddde64c24f2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33884"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33885"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33888"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33886"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33887"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-099552": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ea:0f:62:a1:13:1a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c084050a20a9e46a211e9f023f9558fec9400a691d73a4266e29ff60000fdc12",
	                    "EndpointID": "d69770401db68668497e8a9ddef6c93a77673f95c78cb11a18c567c6244c9d3d",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-099552",
	                        "dc5a23440120"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-099552 -n old-k8s-version-099552
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-099552 -n old-k8s-version-099552: exit status 6 (287.669686ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0917 00:50:32.964320 1108246 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-099552" does not appear in /home/jenkins/minikube-integration/21550-749120/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-099552 logs -n 25
E0917 00:50:33.031899  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/functional-695580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-099552 logs -n 25: (1.029325023s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────
─────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────
─────────────┤
	│ stop    │ -p old-k8s-version-099552 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-099552       │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ addons  │ enable metrics-server -p newest-cni-895748 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ newest-cni-895748            │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-011954 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ default-k8s-diff-port-011954 │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ start   │ -p default-k8s-diff-port-011954 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0                                                                      │ default-k8s-diff-port-011954 │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:50 UTC │
	│ stop    │ -p newest-cni-895748 --alsologtostderr -v=3                                                                                                                                                                                                         │ newest-cni-895748            │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ addons  │ enable dashboard -p newest-cni-895748 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ newest-cni-895748            │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ start   │ -p newest-cni-895748 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0 │ newest-cni-895748            │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-099552 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-099552       │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ start   │ -p old-k8s-version-099552 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-099552       │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │                     │
	│ image   │ newest-cni-895748 image list --format=json                                                                                                                                                                                                          │ newest-cni-895748            │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ pause   │ -p newest-cni-895748 --alsologtostderr -v=1                                                                                                                                                                                                         │ newest-cni-895748            │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ unpause │ -p newest-cni-895748 --alsologtostderr -v=1                                                                                                                                                                                                         │ newest-cni-895748            │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ delete  │ -p newest-cni-895748                                                                                                                                                                                                                                │ newest-cni-895748            │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ delete  │ -p newest-cni-895748                                                                                                                                                                                                                                │ newest-cni-895748            │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ delete  │ -p disable-driver-mounts-908870                                                                                                                                                                                                                     │ disable-driver-mounts-908870 │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ start   │ -p embed-certs-656365 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0                                                                                        │ embed-certs-656365           │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-305343 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ no-preload-305343            │ jenkins │ v1.37.0 │ 17 Sep 25 00:50 UTC │ 17 Sep 25 00:50 UTC │
	│ stop    │ -p no-preload-305343 --alsologtostderr -v=3                                                                                                                                                                                                         │ no-preload-305343            │ jenkins │ v1.37.0 │ 17 Sep 25 00:50 UTC │ 17 Sep 25 00:50 UTC │
	│ image   │ default-k8s-diff-port-011954 image list --format=json                                                                                                                                                                                               │ default-k8s-diff-port-011954 │ jenkins │ v1.37.0 │ 17 Sep 25 00:50 UTC │ 17 Sep 25 00:50 UTC │
	│ pause   │ -p default-k8s-diff-port-011954 --alsologtostderr -v=1                                                                                                                                                                                              │ default-k8s-diff-port-011954 │ jenkins │ v1.37.0 │ 17 Sep 25 00:50 UTC │ 17 Sep 25 00:50 UTC │
	│ unpause │ -p default-k8s-diff-port-011954 --alsologtostderr -v=1                                                                                                                                                                                              │ default-k8s-diff-port-011954 │ jenkins │ v1.37.0 │ 17 Sep 25 00:50 UTC │ 17 Sep 25 00:50 UTC │
	│ delete  │ -p default-k8s-diff-port-011954                                                                                                                                                                                                                     │ default-k8s-diff-port-011954 │ jenkins │ v1.37.0 │ 17 Sep 25 00:50 UTC │ 17 Sep 25 00:50 UTC │
	│ addons  │ enable dashboard -p no-preload-305343 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ no-preload-305343            │ jenkins │ v1.37.0 │ 17 Sep 25 00:50 UTC │ 17 Sep 25 00:50 UTC │
	│ start   │ -p no-preload-305343 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0                                                                                       │ no-preload-305343            │ jenkins │ v1.37.0 │ 17 Sep 25 00:50 UTC │                     │
	│ image   │ old-k8s-version-099552 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-099552       │ jenkins │ v1.37.0 │ 17 Sep 25 00:50 UTC │ 17 Sep 25 00:50 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────
─────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/17 00:50:26
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 00:50:26.600102 1105542 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:50:26.600372 1105542 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:50:26.600383 1105542 out.go:374] Setting ErrFile to fd 2...
	I0917 00:50:26.600387 1105542 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:50:26.600688 1105542 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-749120/.minikube/bin
	I0917 00:50:26.601270 1105542 out.go:368] Setting JSON to false
	I0917 00:50:26.602722 1105542 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":12769,"bootTime":1758057458,"procs":266,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 00:50:26.602857 1105542 start.go:140] virtualization: kvm guest
	I0917 00:50:26.604653 1105542 out.go:179] * [no-preload-305343] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0917 00:50:26.605852 1105542 out.go:179]   - MINIKUBE_LOCATION=21550
	I0917 00:50:26.605872 1105542 notify.go:220] Checking for updates...
	I0917 00:50:26.607972 1105542 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 00:50:26.609198 1105542 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-749120/kubeconfig
	I0917 00:50:26.610268 1105542 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-749120/.minikube
	I0917 00:50:26.611322 1105542 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 00:50:26.612361 1105542 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 00:50:26.613681 1105542 config.go:182] Loaded profile config "no-preload-305343": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:50:26.614196 1105542 driver.go:421] Setting default libvirt URI to qemu:///system
	I0917 00:50:26.637278 1105542 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0917 00:50:26.637376 1105542 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:50:26.690682 1105542 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:65 SystemTime:2025-09-17 00:50:26.681595287 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:50:26.690839 1105542 docker.go:318] overlay module found
	I0917 00:50:26.692660 1105542 out.go:179] * Using the docker driver based on existing profile
	I0917 00:50:26.693532 1105542 start.go:304] selected driver: docker
	I0917 00:50:26.693548 1105542 start.go:918] validating driver "docker" against &{Name:no-preload-305343 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-305343 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:50:26.693646 1105542 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 00:50:26.694360 1105542 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:50:26.747230 1105542 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:65 SystemTime:2025-09-17 00:50:26.73681015 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:50:26.747565 1105542 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:50:26.747603 1105542 cni.go:84] Creating CNI manager for ""
	I0917 00:50:26.747674 1105542 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0917 00:50:26.747730 1105542 start.go:348] cluster config:
	{Name:no-preload-305343 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-305343 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:50:26.749227 1105542 out.go:179] * Starting "no-preload-305343" primary control-plane node in "no-preload-305343" cluster
	I0917 00:50:26.750402 1105542 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0917 00:50:26.751455 1105542 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:50:26.752350 1105542 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0917 00:50:26.752446 1105542 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:50:26.752497 1105542 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/no-preload-305343/config.json ...
	I0917 00:50:26.752648 1105542 cache.go:107] acquiring lock: {Name:mk4909779fb0f5743ddfc059d2d0162861e84f07 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:50:26.752675 1105542 cache.go:107] acquiring lock: {Name:mk6df29775b0b58d1ac8dea5ffe905dd7aa0e789 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:50:26.752655 1105542 cache.go:107] acquiring lock: {Name:mkddf7ba64ef1815649c8a0d31e1ab341ed655cd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:50:26.752697 1105542 cache.go:107] acquiring lock: {Name:mk97f69e3cbe6c234e4de1197be2229ef06ba13f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:50:26.752737 1105542 cache.go:107] acquiring lock: {Name:mkaa0f91c4e98db3393f92864e13e9189082e595 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:50:26.752794 1105542 cache.go:115] /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.0 exists
	I0917 00:50:26.752805 1105542 cache.go:115] /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0917 00:50:26.752810 1105542 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.0" -> "/home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.0" took 171.127µs
	I0917 00:50:26.752837 1105542 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.0 -> /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.0 succeeded
	I0917 00:50:26.752817 1105542 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 180.429µs
	I0917 00:50:26.752848 1105542 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0917 00:50:26.752844 1105542 cache.go:115] /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.0 exists
	I0917 00:50:26.752830 1105542 cache.go:107] acquiring lock: {Name:mkf4cb04c1071ecafdc32f1a85d4e090a7c4807c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:50:26.752860 1105542 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.0" -> "/home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.0" took 124.82µs
	I0917 00:50:26.752867 1105542 cache.go:115] /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I0917 00:50:26.752870 1105542 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.0 -> /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.0 succeeded
	I0917 00:50:26.752850 1105542 cache.go:115] /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.0 exists
	I0917 00:50:26.752876 1105542 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 206.283µs
	I0917 00:50:26.752885 1105542 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I0917 00:50:26.752884 1105542 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.0" -> "/home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.0" took 209.597µs
	I0917 00:50:26.752892 1105542 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.0 -> /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.0 succeeded
	I0917 00:50:26.752943 1105542 cache.go:115] /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I0917 00:50:26.752908 1105542 cache.go:107] acquiring lock: {Name:mkc74a1cd2fbc63086196dff6872225ceed330b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:50:26.752953 1105542 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 170.837µs
	I0917 00:50:26.752932 1105542 cache.go:107] acquiring lock: {Name:mka95aa97be9e772922157c335bf881cd020f83a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:50:26.752963 1105542 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I0917 00:50:26.753088 1105542 cache.go:115] /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I0917 00:50:26.753104 1105542 cache.go:115] /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.0 exists
	I0917 00:50:26.753103 1105542 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 260.35µs
	I0917 00:50:26.753116 1105542 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I0917 00:50:26.753114 1105542 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.0" -> "/home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.0" took 233.147µs
	I0917 00:50:26.753124 1105542 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.0 -> /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.0 succeeded
	I0917 00:50:26.753139 1105542 cache.go:87] Successfully saved all images to host disk.
	I0917 00:50:26.772809 1105542 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:50:26.772828 1105542 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:50:26.772844 1105542 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:50:26.772868 1105542 start.go:360] acquireMachinesLock for no-preload-305343: {Name:mk301cc5652bfe73a264aaf61a48b9167df412f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:50:26.772918 1105542 start.go:364] duration metric: took 33.839µs to acquireMachinesLock for "no-preload-305343"
	I0917 00:50:26.772939 1105542 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:50:26.772949 1105542 fix.go:54] fixHost starting: 
	I0917 00:50:26.773161 1105542 cli_runner.go:164] Run: docker container inspect no-preload-305343 --format={{.State.Status}}
	I0917 00:50:26.790526 1105542 fix.go:112] recreateIfNeeded on no-preload-305343: state=Stopped err=<nil>
	W0917 00:50:26.790551 1105542 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:50:23.500084 1099270 system_pods.go:86] 8 kube-system pods found
	I0917 00:50:23.500124 1099270 system_pods.go:89] "coredns-66bc5c9577-l6kf4" [3471ccde-7a2f-40db-9b89-0b0b1d99d708] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:50:23.500134 1099270 system_pods.go:89] "etcd-embed-certs-656365" [7019686f-c4e9-4229-96b9-b0b736f1ff1f] Running
	I0917 00:50:23.500142 1099270 system_pods.go:89] "kindnet-82pzc" [98599a74-dfca-4aaa-8c5f-f5b5fa514aea] Running
	I0917 00:50:23.500148 1099270 system_pods.go:89] "kube-apiserver-embed-certs-656365" [4474c0bb-d251-4a3f-9617-05519f6f36e1] Running
	I0917 00:50:23.500156 1099270 system_pods.go:89] "kube-controller-manager-embed-certs-656365" [9d8ea73e-c495-4fc0-8e50-d21f3bbd7bf7] Running
	I0917 00:50:23.500163 1099270 system_pods.go:89] "kube-proxy-h2lgd" [ec2eebd5-b4b3-41cf-af3b-efda8464fe22] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 00:50:23.500173 1099270 system_pods.go:89] "kube-scheduler-embed-certs-656365" [74679934-eb97-474d-99a2-28c356aa74b4] Running
	I0917 00:50:23.500181 1099270 system_pods.go:89] "storage-provisioner" [124939ce-5cb9-41fa-b555-2b807af05792] Running
	I0917 00:50:23.500205 1099270 retry.go:31] will retry after 4.483504149s: missing components: kube-dns
	I0917 00:50:28.271397 1094183 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:50:28.306326 1094183 out.go:203] 
	W0917 00:50:28.307357 1094183 out.go:285] X Exiting due to RUNTIME_ENABLE: Failed to start container runtime: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:50:28.301888     533 remote_runtime.go:189] "Version from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	time="2025-09-17T00:50:28Z" level=fatal msg="getting the runtime version: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	
	W0917 00:50:28.307374 1094183 out.go:285] * 
	W0917 00:50:28.309097 1094183 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 00:50:28.310228 1094183 out.go:203] 
	I0917 00:50:26.792668 1105542 out.go:252] * Restarting existing docker container for "no-preload-305343" ...
	I0917 00:50:26.792734 1105542 cli_runner.go:164] Run: docker start no-preload-305343
	I0917 00:50:27.022497 1105542 cli_runner.go:164] Run: docker container inspect no-preload-305343 --format={{.State.Status}}
	I0917 00:50:27.039968 1105542 kic.go:430] container "no-preload-305343" state is running.
	I0917 00:50:27.040316 1105542 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-305343
	I0917 00:50:27.058031 1105542 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/no-preload-305343/config.json ...
	I0917 00:50:27.058242 1105542 machine.go:93] provisionDockerMachine start ...
	I0917 00:50:27.058314 1105542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-305343
	I0917 00:50:27.076745 1105542 main.go:141] libmachine: Using SSH client type: native
	I0917 00:50:27.076991 1105542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33894 <nil> <nil>}
	I0917 00:50:27.077008 1105542 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:50:27.077565 1105542 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38760->127.0.0.1:33894: read: connection reset by peer
	I0917 00:50:30.213495 1105542 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-305343
	
	I0917 00:50:30.213525 1105542 ubuntu.go:182] provisioning hostname "no-preload-305343"
	I0917 00:50:30.213589 1105542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-305343
	I0917 00:50:30.233747 1105542 main.go:141] libmachine: Using SSH client type: native
	I0917 00:50:30.234073 1105542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33894 <nil> <nil>}
	I0917 00:50:30.234091 1105542 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-305343 && echo "no-preload-305343" | sudo tee /etc/hostname
	I0917 00:50:30.382760 1105542 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-305343
	
	I0917 00:50:30.382841 1105542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-305343
	I0917 00:50:30.402051 1105542 main.go:141] libmachine: Using SSH client type: native
	I0917 00:50:30.402349 1105542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33894 <nil> <nil>}
	I0917 00:50:30.402383 1105542 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-305343' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-305343/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-305343' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 00:50:30.537908 1105542 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:50:30.537940 1105542 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-749120/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-749120/.minikube}
	I0917 00:50:30.537984 1105542 ubuntu.go:190] setting up certificates
	I0917 00:50:30.537995 1105542 provision.go:84] configureAuth start
	I0917 00:50:30.538058 1105542 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-305343
	I0917 00:50:30.556184 1105542 provision.go:143] copyHostCerts
	I0917 00:50:30.556260 1105542 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem, removing ...
	I0917 00:50:30.556285 1105542 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem
	I0917 00:50:30.556352 1105542 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem (1078 bytes)
	I0917 00:50:30.556484 1105542 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem, removing ...
	I0917 00:50:30.556494 1105542 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem
	I0917 00:50:30.556525 1105542 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem (1123 bytes)
	I0917 00:50:30.556597 1105542 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem, removing ...
	I0917 00:50:30.556604 1105542 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem
	I0917 00:50:30.556630 1105542 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem (1675 bytes)
	I0917 00:50:30.556699 1105542 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem org=jenkins.no-preload-305343 san=[127.0.0.1 192.168.103.2 localhost minikube no-preload-305343]
	I0917 00:50:30.756241 1105542 provision.go:177] copyRemoteCerts
	I0917 00:50:30.756301 1105542 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:50:30.756332 1105542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-305343
	I0917 00:50:30.775252 1105542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33894 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/no-preload-305343/id_rsa Username:docker}
	I0917 00:50:30.871607 1105542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 00:50:30.897809 1105542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0917 00:50:30.924573 1105542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 00:50:30.953460 1105542 provision.go:87] duration metric: took 415.445888ms to configureAuth
	I0917 00:50:30.953499 1105542 ubuntu.go:206] setting minikube options for container-runtime
	I0917 00:50:30.953714 1105542 config.go:182] Loaded profile config "no-preload-305343": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:50:30.953727 1105542 machine.go:96] duration metric: took 3.895472569s to provisionDockerMachine
	I0917 00:50:30.953736 1105542 start.go:293] postStartSetup for "no-preload-305343" (driver="docker")
	I0917 00:50:30.953749 1105542 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 00:50:30.953811 1105542 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 00:50:30.953864 1105542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-305343
	I0917 00:50:30.972069 1105542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33894 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/no-preload-305343/id_rsa Username:docker}
	I0917 00:50:31.071933 1105542 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 00:50:31.075759 1105542 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 00:50:31.075799 1105542 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 00:50:31.075813 1105542 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 00:50:31.075825 1105542 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 00:50:31.075837 1105542 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-749120/.minikube/addons for local assets ...
	I0917 00:50:31.075902 1105542 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-749120/.minikube/files for local assets ...
	I0917 00:50:31.075982 1105542 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> 7527072.pem in /etc/ssl/certs
	I0917 00:50:31.076070 1105542 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 00:50:31.085212 1105542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem --> /etc/ssl/certs/7527072.pem (1708 bytes)
	I0917 00:50:31.111110 1105542 start.go:296] duration metric: took 157.358327ms for postStartSetup
	I0917 00:50:31.111187 1105542 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:50:31.111241 1105542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-305343
	I0917 00:50:31.129720 1105542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33894 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/no-preload-305343/id_rsa Username:docker}
	I0917 00:50:31.222275 1105542 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:50:31.226586 1105542 fix.go:56] duration metric: took 4.453629943s for fixHost
	I0917 00:50:31.226610 1105542 start.go:83] releasing machines lock for "no-preload-305343", held for 4.453677502s
	I0917 00:50:31.226680 1105542 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-305343
	I0917 00:50:31.245832 1105542 ssh_runner.go:195] Run: cat /version.json
	I0917 00:50:31.245873 1105542 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 00:50:31.245882 1105542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-305343
	I0917 00:50:31.245943 1105542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-305343
	I0917 00:50:31.266309 1105542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33894 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/no-preload-305343/id_rsa Username:docker}
	I0917 00:50:31.266864 1105542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33894 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/no-preload-305343/id_rsa Username:docker}
	I0917 00:50:31.456251 1105542 ssh_runner.go:195] Run: systemctl --version
	I0917 00:50:31.461506 1105542 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 00:50:31.466079 1105542 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0917 00:50:31.486672 1105542 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0917 00:50:31.486747 1105542 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:50:31.495929 1105542 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0917 00:50:31.495956 1105542 start.go:495] detecting cgroup driver to use...
	I0917 00:50:31.495995 1105542 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:50:31.496042 1105542 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 00:50:31.509362 1105542 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 00:50:31.520971 1105542 docker.go:218] disabling cri-docker service (if available) ...
	I0917 00:50:31.521016 1105542 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 00:50:31.533687 1105542 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 00:50:31.545166 1105542 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 00:50:27.988100 1099270 system_pods.go:86] 8 kube-system pods found
	I0917 00:50:27.988133 1099270 system_pods.go:89] "coredns-66bc5c9577-l6kf4" [3471ccde-7a2f-40db-9b89-0b0b1d99d708] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:50:27.988140 1099270 system_pods.go:89] "etcd-embed-certs-656365" [7019686f-c4e9-4229-96b9-b0b736f1ff1f] Running
	I0917 00:50:27.988149 1099270 system_pods.go:89] "kindnet-82pzc" [98599a74-dfca-4aaa-8c5f-f5b5fa514aea] Running
	I0917 00:50:27.988155 1099270 system_pods.go:89] "kube-apiserver-embed-certs-656365" [4474c0bb-d251-4a3f-9617-05519f6f36e1] Running
	I0917 00:50:27.988160 1099270 system_pods.go:89] "kube-controller-manager-embed-certs-656365" [9d8ea73e-c495-4fc0-8e50-d21f3bbd7bf7] Running
	I0917 00:50:27.988165 1099270 system_pods.go:89] "kube-proxy-h2lgd" [ec2eebd5-b4b3-41cf-af3b-efda8464fe22] Running
	I0917 00:50:27.988170 1099270 system_pods.go:89] "kube-scheduler-embed-certs-656365" [74679934-eb97-474d-99a2-28c356aa74b4] Running
	I0917 00:50:27.988174 1099270 system_pods.go:89] "storage-provisioner" [124939ce-5cb9-41fa-b555-2b807af05792] Running
	I0917 00:50:27.988200 1099270 retry.go:31] will retry after 4.946737764s: missing components: kube-dns
	I0917 00:50:31.609866 1105542 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 00:50:31.685656 1105542 docker.go:234] disabling docker service ...
	I0917 00:50:31.685721 1105542 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 00:50:31.700024 1105542 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 00:50:31.711577 1105542 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 00:50:31.779451 1105542 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 00:50:31.848784 1105542 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 00:50:31.862220 1105542 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:50:31.881443 1105542 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0917 00:50:31.891493 1105542 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 00:50:31.901329 1105542 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0917 00:50:31.901379 1105542 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0917 00:50:31.911827 1105542 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:50:31.922603 1105542 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 00:50:31.932809 1105542 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:50:31.942487 1105542 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 00:50:31.952360 1105542 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 00:50:31.962527 1105542 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 00:50:31.972596 1105542 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 00:50:31.983099 1105542 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 00:50:31.991788 1105542 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 00:50:32.000521 1105542 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:50:32.069475 1105542 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0917 00:50:32.171320 1105542 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0917 00:50:32.171376 1105542 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0917 00:50:32.176131 1105542 start.go:563] Will wait 60s for crictl version
	I0917 00:50:32.176181 1105542 ssh_runner.go:195] Run: which crictl
	I0917 00:50:32.179964 1105542 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:50:32.216471 1105542 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0917 00:50:32.216551 1105542 ssh_runner.go:195] Run: containerd --version
	I0917 00:50:32.242200 1105542 ssh_runner.go:195] Run: containerd --version
	I0917 00:50:32.268158 1105542 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:50:33.815503    1214 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	time="2025-09-17T00:50:33Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> containerd <==
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.817851916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.817861430Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.817871180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.817881996Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.817897602Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.817915140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.817935663Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818007576Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818024724Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818033043Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818056520Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818070483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818081010Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818090259Z" level=info msg="NRI interface is disabled by configuration."
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818098205Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818326501Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRu
ntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:true IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath:/etc/containerd/certs.d Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress: StreamServerPort:10010 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.9 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissi
ngHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:true EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818370040Z" level=info msg="Connect containerd service"
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818405590Z" level=info msg="using legacy CRI server"
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818449224Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818553850Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818992386Z" level=warning msg="failed to load plugin io.containerd.grpc.v1.cri" error="failed to create CRI service: failed to create cni conf monitor for default: failed to create fsnotify watcher: too many open files"
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.819183751Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.819229690Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.819274263Z" level=info msg="containerd successfully booted in 0.026038s"
	Sep 17 00:49:32 old-k8s-version-099552 systemd[1]: Started containerd container runtime.
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.397971] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev bridge
	[  +1.103886] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev bridge
	[  +0.397468] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev bridge
	[  +0.988018] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev bridge
	[  +0.115808] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev bridge
	[  +0.396948] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev bridge
	[  +1.104485] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev bridge
	[  +0.396293] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev bridge
	[  +1.105124] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev bridge
	[  +0.396148] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev bridge
	[  +1.500649] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev bridge
	[  +0.569526] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 5f ef 1c 19 8f 08 06
	[ +14.523051] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 56 52 1e b8 75 d2 08 06
	[  +0.000432] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff ba 5e 90 6a a2 f3 08 06
	[  +7.560975] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 2a d4 fe 64 89 0f 08 06
	[  +0.000660] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d6 50 38 7f fb 72 08 06
	[Sep17 00:48] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ea ab 9c df 51 6c 08 06
	[  +0.000561] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 92 5f ef 1c 19 8f 08 06
	
	
	==> kernel <==
	 00:50:33 up  3:32,  0 users,  load average: 2.84, 2.89, 2.17
	Linux old-k8s-version-099552 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kubelet <==
	-- No entries --
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0917 00:50:33.355508 1108415 logs.go:279] Failed to list containers for "kube-apiserver": crictl list: sudo crictl ps -a --quiet --name=kube-apiserver: Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:50:33.352003    1118 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	time="2025-09-17T00:50:33Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0917 00:50:33.412227 1108415 logs.go:279] Failed to list containers for "etcd": crictl list: sudo crictl ps -a --quiet --name=etcd: Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:50:33.408353    1130 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	time="2025-09-17T00:50:33Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0917 00:50:33.466136 1108415 logs.go:279] Failed to list containers for "coredns": crictl list: sudo crictl ps -a --quiet --name=coredns: Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:50:33.462047    1141 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	time="2025-09-17T00:50:33Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0917 00:50:33.521621 1108415 logs.go:279] Failed to list containers for "kube-scheduler": crictl list: sudo crictl ps -a --quiet --name=kube-scheduler: Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:50:33.516937    1152 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	time="2025-09-17T00:50:33Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0917 00:50:33.579022 1108415 logs.go:279] Failed to list containers for "kube-proxy": crictl list: sudo crictl ps -a --quiet --name=kube-proxy: Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:50:33.574589    1162 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	time="2025-09-17T00:50:33Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0917 00:50:33.626404 1108415 logs.go:279] Failed to list containers for "kube-controller-manager": crictl list: sudo crictl ps -a --quiet --name=kube-controller-manager: Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:50:33.622321    1173 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	time="2025-09-17T00:50:33Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0917 00:50:33.673891 1108415 logs.go:279] Failed to list containers for "kindnet": crictl list: sudo crictl ps -a --quiet --name=kindnet: Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:50:33.670157    1182 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	time="2025-09-17T00:50:33Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0917 00:50:33.719446 1108415 logs.go:279] Failed to list containers for "kubernetes-dashboard": crictl list: sudo crictl ps -a --quiet --name=kubernetes-dashboard: Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:50:33.716253    1191 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	time="2025-09-17T00:50:33Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0917 00:50:33.760927 1108415 logs.go:279] Failed to list containers for "storage-provisioner": crictl list: sudo crictl ps -a --quiet --name=storage-provisioner: Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:50:33.757372    1201 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	time="2025-09-17T00:50:33Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"

                                                
                                                
** /stderr **
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-099552 -n old-k8s-version-099552
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-099552 -n old-k8s-version-099552: exit status 6 (324.949944ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0917 00:50:34.327634 1109283 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-099552" does not appear in /home/jenkins/minikube-integration/21550-749120/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "old-k8s-version-099552" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (1.93s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (5.33s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-099552 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-099552 --alsologtostderr -v=1: exit status 80 (2.481759755s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-099552 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 00:50:34.402174 1109388 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:50:34.402284 1109388 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:50:34.402293 1109388 out.go:374] Setting ErrFile to fd 2...
	I0917 00:50:34.402299 1109388 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:50:34.402627 1109388 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-749120/.minikube/bin
	I0917 00:50:34.403335 1109388 out.go:368] Setting JSON to false
	I0917 00:50:34.403381 1109388 mustload.go:65] Loading cluster: old-k8s-version-099552
	I0917 00:50:34.404484 1109388 config.go:182] Loaded profile config "old-k8s-version-099552": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I0917 00:50:34.405011 1109388 cli_runner.go:164] Run: docker container inspect old-k8s-version-099552 --format={{.State.Status}}
	I0917 00:50:34.424917 1109388 host.go:66] Checking if "old-k8s-version-099552" exists ...
	I0917 00:50:34.425316 1109388 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:50:34.489366 1109388 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-09-17 00:50:34.47784738 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:50:34.490048 1109388 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube/iso/minikube-v1.37.0-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0/minikube-v1.37.0-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-099552 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0917 00:50:34.492387 1109388 out.go:179] * Pausing node old-k8s-version-099552 ... 
	I0917 00:50:34.493344 1109388 host.go:66] Checking if "old-k8s-version-099552" exists ...
	I0917 00:50:34.493657 1109388 ssh_runner.go:195] Run: systemctl --version
	I0917 00:50:34.493698 1109388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-099552
	I0917 00:50:34.513101 1109388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33884 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/old-k8s-version-099552/id_rsa Username:docker}
	I0917 00:50:34.607750 1109388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:50:34.620468 1109388 pause.go:51] kubelet running: false
	I0917 00:50:34.620522 1109388 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0917 00:50:34.699230 1109388 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0917 00:50:34.699339 1109388 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0917 00:50:34.834754 1109388 retry.go:31] will retry after 351.60782ms: list running: crictl list: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator": Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:50:34.733806    1286 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{io.kubernetes.pod.namespace: kube-system,},}"
	time="2025-09-17T00:50:34Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0917 00:50:34.760346    1297 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{io.kubernetes.pod.namespace: kubernetes-dashboard,},}"
	time="2025-09-17T00:50:34Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0917 00:50:34.791290    1308 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{io.kubernetes.pod.namespace: storage-gluster,},}"
	time="2025-09-17T00:50:34Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0917 00:50:34.831471    1319 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{io.kubernetes.pod.namespace: istio-operator,},}"
	time="2025-09-17T00:50:34Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	I0917 00:50:35.188235 1109388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:50:35.203103 1109388 pause.go:51] kubelet running: false
	I0917 00:50:35.203163 1109388 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0917 00:50:35.285616 1109388 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0917 00:50:35.285700 1109388 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0917 00:50:35.438609 1109388 retry.go:31] will retry after 365.369133ms: list running: crictl list: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator": Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:50:35.330505    1349 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{io.kubernetes.pod.namespace: kube-system,},}"
	time="2025-09-17T00:50:35Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0917 00:50:35.363726    1358 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{io.kubernetes.pod.namespace: kubernetes-dashboard,},}"
	time="2025-09-17T00:50:35Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0917 00:50:35.397774    1368 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{io.kubernetes.pod.namespace: storage-gluster,},}"
	time="2025-09-17T00:50:35Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0917 00:50:35.434804    1378 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{io.kubernetes.pod.namespace: istio-operator,},}"
	time="2025-09-17T00:50:35Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	I0917 00:50:35.804182 1109388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:50:35.816488 1109388 pause.go:51] kubelet running: false
	I0917 00:50:35.816544 1109388 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0917 00:50:35.886245 1109388 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0917 00:50:35.886318 1109388 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0917 00:50:36.031041 1109388 retry.go:31] will retry after 575.980658ms: list running: crictl list: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator": Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:50:35.918880    1407 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{io.kubernetes.pod.namespace: kube-system,},}"
	time="2025-09-17T00:50:35Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0917 00:50:35.943913    1418 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{io.kubernetes.pod.namespace: kubernetes-dashboard,},}"
	time="2025-09-17T00:50:35Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0917 00:50:35.979734    1429 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{io.kubernetes.pod.namespace: storage-gluster,},}"
	time="2025-09-17T00:50:35Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0917 00:50:36.025575    1439 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{io.kubernetes.pod.namespace: istio-operator,},}"
	time="2025-09-17T00:50:36Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	I0917 00:50:36.607695 1109388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:50:36.620630 1109388 pause.go:51] kubelet running: false
	I0917 00:50:36.620685 1109388 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0917 00:50:36.688783 1109388 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0917 00:50:36.688899 1109388 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0917 00:50:36.815858 1109388 out.go:203] 
	W0917 00:50:36.816928 1109388 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: crictl list: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator": Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:50:36.729791    1467 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{io.kubernetes.pod.namespace: kube-system,},}"
	time="2025-09-17T00:50:36Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0917 00:50:36.759740    1477 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{io.kubernetes.pod.namespace: kubernetes-dashboard,},}"
	time="2025-09-17T00:50:36Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0917 00:50:36.785356    1486 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{io.kubernetes.pod.namespace: storage-gluster,},}"
	time="2025-09-17T00:50:36Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0917 00:50:36.810196    1497 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{io.kubernetes.pod.namespace: istio-operator,},}"
	time="2025-09-17T00:50:36Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: crictl list: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator": Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:50:36.729791    1467 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{io.kubernetes.pod.namespace: kube-system,},}"
	time="2025-09-17T00:50:36Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0917 00:50:36.759740    1477 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{io.kubernetes.pod.namespace: kubernetes-dashboard,},}"
	time="2025-09-17T00:50:36Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0917 00:50:36.785356    1486 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{io.kubernetes.pod.namespace: storage-gluster,},}"
	time="2025-09-17T00:50:36Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0917 00:50:36.810196    1497 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{io.kubernetes.pod.namespace: istio-operator,},}"
	time="2025-09-17T00:50:36Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	
	W0917 00:50:36.816948 1109388 out.go:285] * 
	* 
	W0917 00:50:36.822064 1109388 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 00:50:36.823288 1109388 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p old-k8s-version-099552 --alsologtostderr -v=1 failed: exit status 80
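The stderr above is the proximate cause of the exit status 80: each `crictl ps` issued over SSH requests the `runtime.v1alpha2.RuntimeService`, and containerd answers `Unimplemented` because it only exposes the CRI `v1` service, so the pause path retries with a growing backoff (~352ms, ~365ms, ~576ms) and then aborts with GUEST_PAUSE. A minimal Go sketch, assuming containerd's default socket path (`/run/containerd/containerd.sock`, which the log itself does not show) and the `k8s.io/cri-api` v1 client, of how to confirm which CRI version the runtime actually serves:

	// cri_version_check.go - hypothetical diagnostic, not part of the test suite.
	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimev1 "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Assumed socket path; the kicbase image may place it elsewhere.
		conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatalf("dial CRI socket: %v", err)
		}
		defer conn.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// A successful v1 Version call means the runtime speaks CRI v1; a client
		// pinned to v1alpha2 (as in the errors above) gets codes.Unimplemented.
		resp, err := runtimev1.NewRuntimeServiceClient(conn).Version(ctx, &runtimev1.VersionRequest{})
		if err != nil {
			log.Fatalf("CRI v1 Version call failed: %v", err)
		}
		fmt.Printf("runtime=%s version=%s apiVersion=%s\n",
			resp.RuntimeName, resp.RuntimeVersion, resp.RuntimeApiVersion)
	}

If the v1 call succeeds while every v1alpha2 call fails, the failure is a CRI client/server version mismatch inside the node image rather than a stopped runtime; the docker inspect below shows the node container itself is still running.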
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-099552
helpers_test.go:243: (dbg) docker inspect old-k8s-version-099552:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "dc5a2344012058adc433d4d94bb8f0cfb2d2d9a3fcc4c579e400734d708bf606",
	        "Created": "2025-09-17T00:47:14.452618877Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1094443,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-17T00:49:26.538560853Z",
	            "FinishedAt": "2025-09-17T00:49:25.518757767Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/dc5a2344012058adc433d4d94bb8f0cfb2d2d9a3fcc4c579e400734d708bf606/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dc5a2344012058adc433d4d94bb8f0cfb2d2d9a3fcc4c579e400734d708bf606/hostname",
	        "HostsPath": "/var/lib/docker/containers/dc5a2344012058adc433d4d94bb8f0cfb2d2d9a3fcc4c579e400734d708bf606/hosts",
	        "LogPath": "/var/lib/docker/containers/dc5a2344012058adc433d4d94bb8f0cfb2d2d9a3fcc4c579e400734d708bf606/dc5a2344012058adc433d4d94bb8f0cfb2d2d9a3fcc4c579e400734d708bf606-json.log",
	        "Name": "/old-k8s-version-099552",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-099552:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-099552",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dc5a2344012058adc433d4d94bb8f0cfb2d2d9a3fcc4c579e400734d708bf606",
	                "LowerDir": "/var/lib/docker/overlay2/5a917f22cc5c8d72ade6fe744606e8b48681ead29c7cc52e99c757cb3866734e-init/diff:/var/lib/docker/overlay2/949a3fbecd0c2c005aa419b7ddc214e7bf4333225d7b227e8b0d0ea188b945ec/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5a917f22cc5c8d72ade6fe744606e8b48681ead29c7cc52e99c757cb3866734e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5a917f22cc5c8d72ade6fe744606e8b48681ead29c7cc52e99c757cb3866734e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5a917f22cc5c8d72ade6fe744606e8b48681ead29c7cc52e99c757cb3866734e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-099552",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-099552/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-099552",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-099552",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-099552",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2ddde64c24f24e3766a57841a672431afff6e67b8b55455f7a18ce1a12566fcb",
	            "SandboxKey": "/var/run/docker/netns/2ddde64c24f2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33884"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33885"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33888"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33886"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33887"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-099552": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ea:0f:62:a1:13:1a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c084050a20a9e46a211e9f023f9558fec9400a691d73a4266e29ff60000fdc12",
	                    "EndpointID": "d69770401db68668497e8a9ddef6c93a77673f95c78cb11a18c567c6244c9d3d",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-099552",
	                        "dc5a23440120"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
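Two details in the inspect output matter for this post-mortem: the container is still up (State.Status "running", restarted at 00:49:26 after the stop/start cycle), and every host port was requested as a dynamic 127.0.0.1 binding in HostConfig.PortBindings ("HostPort": ""), so the actual allocations are only visible under NetworkSettings.Ports (SSH on 33884, the 8443 API port on 33887). A short sketch, assuming the Docker Go SDK (`github.com/docker/docker/client`), of reading those dynamic allocations programmatically instead of scanning the JSON:

	// port_lookup.go - hypothetical helper, not from the minikube codebase.
	package main

	import (
		"context"
		"fmt"
		"log"

		"github.com/docker/docker/client"
	)

	func main() {
		cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
		if err != nil {
			log.Fatal(err)
		}
		defer cli.Close()

		// Container name taken from the profile in this log.
		insp, err := cli.ContainerInspect(context.Background(), "old-k8s-version-099552")
		if err != nil {
			log.Fatal(err)
		}

		// HostConfig.PortBindings only asked for "any free port on 127.0.0.1";
		// NetworkSettings.Ports is where the daemon records what it assigned.
		for _, b := range insp.NetworkSettings.Ports["22/tcp"] {
			fmt.Printf("ssh  -> %s:%s\n", b.HostIP, b.HostPort)
		}
		for _, b := range insp.NetworkSettings.Ports["8443/tcp"] {
			fmt.Printf("8443 -> %s:%s\n", b.HostIP, b.HostPort)
		}
	}

The pause failure is therefore not a dead node container; the kic node is reachable, which is consistent with the `Running` host status reported below.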
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-099552 -n old-k8s-version-099552
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-099552 -n old-k8s-version-099552: exit status 6 (276.700335ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0917 00:50:37.109122 1110466 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-099552" does not appear in /home/jenkins/minikube-integration/21550-749120/kubeconfig

                                                
                                                
** /stderr **
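The exit status 6 here comes from the kubeconfig check rather than from the node: the host reports Running, but the 00:49 start of this profile never completed (its Audit row below has no end time), so the profile was never written into the shared kubeconfig and the endpoint lookup at status.go:458 fails; helpers_test treats that as "may be ok" and continues collecting logs. A rough sketch, assuming client-go's `clientcmd` package and that the profile name is used as the cluster key, of the kind of lookup that produces this error:

	// kubeconfig_lookup.go - illustrative only; the path and key convention are assumptions.
	package main

	import (
		"fmt"
		"log"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.LoadFromFile("/home/jenkins/minikube-integration/21550-749120/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		name := "old-k8s-version-099552"
		cluster, ok := cfg.Clusters[name]
		if !ok {
			// Mirrors the status.go error: the profile never made it into kubeconfig.
			fmt.Printf("%q does not appear in the kubeconfig\n", name)
			return
		}
		fmt.Println("API server endpoint:", cluster.Server)
	}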
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-099552 logs -n 25
E0917 00:50:37.159906  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/addons-346612/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────
─────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────
─────────────┤
	│ addons  │ enable metrics-server -p newest-cni-895748 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ newest-cni-895748            │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-011954 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ default-k8s-diff-port-011954 │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ start   │ -p default-k8s-diff-port-011954 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0                                                                      │ default-k8s-diff-port-011954 │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:50 UTC │
	│ stop    │ -p newest-cni-895748 --alsologtostderr -v=3                                                                                                                                                                                                         │ newest-cni-895748            │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ addons  │ enable dashboard -p newest-cni-895748 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ newest-cni-895748            │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ start   │ -p newest-cni-895748 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0 │ newest-cni-895748            │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-099552 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-099552       │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ start   │ -p old-k8s-version-099552 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-099552       │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │                     │
	│ image   │ newest-cni-895748 image list --format=json                                                                                                                                                                                                          │ newest-cni-895748            │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ pause   │ -p newest-cni-895748 --alsologtostderr -v=1                                                                                                                                                                                                         │ newest-cni-895748            │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ unpause │ -p newest-cni-895748 --alsologtostderr -v=1                                                                                                                                                                                                         │ newest-cni-895748            │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ delete  │ -p newest-cni-895748                                                                                                                                                                                                                                │ newest-cni-895748            │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ delete  │ -p newest-cni-895748                                                                                                                                                                                                                                │ newest-cni-895748            │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ delete  │ -p disable-driver-mounts-908870                                                                                                                                                                                                                     │ disable-driver-mounts-908870 │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ start   │ -p embed-certs-656365 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0                                                                                        │ embed-certs-656365           │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-305343 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ no-preload-305343            │ jenkins │ v1.37.0 │ 17 Sep 25 00:50 UTC │ 17 Sep 25 00:50 UTC │
	│ stop    │ -p no-preload-305343 --alsologtostderr -v=3                                                                                                                                                                                                         │ no-preload-305343            │ jenkins │ v1.37.0 │ 17 Sep 25 00:50 UTC │ 17 Sep 25 00:50 UTC │
	│ image   │ default-k8s-diff-port-011954 image list --format=json                                                                                                                                                                                               │ default-k8s-diff-port-011954 │ jenkins │ v1.37.0 │ 17 Sep 25 00:50 UTC │ 17 Sep 25 00:50 UTC │
	│ pause   │ -p default-k8s-diff-port-011954 --alsologtostderr -v=1                                                                                                                                                                                              │ default-k8s-diff-port-011954 │ jenkins │ v1.37.0 │ 17 Sep 25 00:50 UTC │ 17 Sep 25 00:50 UTC │
	│ unpause │ -p default-k8s-diff-port-011954 --alsologtostderr -v=1                                                                                                                                                                                              │ default-k8s-diff-port-011954 │ jenkins │ v1.37.0 │ 17 Sep 25 00:50 UTC │ 17 Sep 25 00:50 UTC │
	│ delete  │ -p default-k8s-diff-port-011954                                                                                                                                                                                                                     │ default-k8s-diff-port-011954 │ jenkins │ v1.37.0 │ 17 Sep 25 00:50 UTC │ 17 Sep 25 00:50 UTC │
	│ addons  │ enable dashboard -p no-preload-305343 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ no-preload-305343            │ jenkins │ v1.37.0 │ 17 Sep 25 00:50 UTC │ 17 Sep 25 00:50 UTC │
	│ start   │ -p no-preload-305343 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0                                                                                       │ no-preload-305343            │ jenkins │ v1.37.0 │ 17 Sep 25 00:50 UTC │                     │
	│ image   │ old-k8s-version-099552 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-099552       │ jenkins │ v1.37.0 │ 17 Sep 25 00:50 UTC │ 17 Sep 25 00:50 UTC │
	│ pause   │ -p old-k8s-version-099552 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-099552       │ jenkins │ v1.37.0 │ 17 Sep 25 00:50 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────
─────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/17 00:50:26
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 00:50:26.600102 1105542 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:50:26.600372 1105542 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:50:26.600383 1105542 out.go:374] Setting ErrFile to fd 2...
	I0917 00:50:26.600387 1105542 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:50:26.600688 1105542 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-749120/.minikube/bin
	I0917 00:50:26.601270 1105542 out.go:368] Setting JSON to false
	I0917 00:50:26.602722 1105542 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":12769,"bootTime":1758057458,"procs":266,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 00:50:26.602857 1105542 start.go:140] virtualization: kvm guest
	I0917 00:50:26.604653 1105542 out.go:179] * [no-preload-305343] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0917 00:50:26.605852 1105542 out.go:179]   - MINIKUBE_LOCATION=21550
	I0917 00:50:26.605872 1105542 notify.go:220] Checking for updates...
	I0917 00:50:26.607972 1105542 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 00:50:26.609198 1105542 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-749120/kubeconfig
	I0917 00:50:26.610268 1105542 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-749120/.minikube
	I0917 00:50:26.611322 1105542 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 00:50:26.612361 1105542 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 00:50:26.613681 1105542 config.go:182] Loaded profile config "no-preload-305343": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:50:26.614196 1105542 driver.go:421] Setting default libvirt URI to qemu:///system
	I0917 00:50:26.637278 1105542 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0917 00:50:26.637376 1105542 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:50:26.690682 1105542 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:65 SystemTime:2025-09-17 00:50:26.681595287 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:50:26.690839 1105542 docker.go:318] overlay module found
	I0917 00:50:26.692660 1105542 out.go:179] * Using the docker driver based on existing profile
	I0917 00:50:26.693532 1105542 start.go:304] selected driver: docker
	I0917 00:50:26.693548 1105542 start.go:918] validating driver "docker" against &{Name:no-preload-305343 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-305343 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:50:26.693646 1105542 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 00:50:26.694360 1105542 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:50:26.747230 1105542 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:65 SystemTime:2025-09-17 00:50:26.73681015 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:50:26.747565 1105542 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:50:26.747603 1105542 cni.go:84] Creating CNI manager for ""
	I0917 00:50:26.747674 1105542 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0917 00:50:26.747730 1105542 start.go:348] cluster config:
	{Name:no-preload-305343 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-305343 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:50:26.749227 1105542 out.go:179] * Starting "no-preload-305343" primary control-plane node in "no-preload-305343" cluster
	I0917 00:50:26.750402 1105542 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0917 00:50:26.751455 1105542 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:50:26.752350 1105542 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0917 00:50:26.752446 1105542 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:50:26.752497 1105542 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/no-preload-305343/config.json ...
	I0917 00:50:26.752648 1105542 cache.go:107] acquiring lock: {Name:mk4909779fb0f5743ddfc059d2d0162861e84f07 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:50:26.752675 1105542 cache.go:107] acquiring lock: {Name:mk6df29775b0b58d1ac8dea5ffe905dd7aa0e789 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:50:26.752655 1105542 cache.go:107] acquiring lock: {Name:mkddf7ba64ef1815649c8a0d31e1ab341ed655cd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:50:26.752697 1105542 cache.go:107] acquiring lock: {Name:mk97f69e3cbe6c234e4de1197be2229ef06ba13f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:50:26.752737 1105542 cache.go:107] acquiring lock: {Name:mkaa0f91c4e98db3393f92864e13e9189082e595 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:50:26.752794 1105542 cache.go:115] /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.0 exists
	I0917 00:50:26.752805 1105542 cache.go:115] /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0917 00:50:26.752810 1105542 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.0" -> "/home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.0" took 171.127µs
	I0917 00:50:26.752837 1105542 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.0 -> /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.0 succeeded
	I0917 00:50:26.752817 1105542 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 180.429µs
	I0917 00:50:26.752848 1105542 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0917 00:50:26.752844 1105542 cache.go:115] /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.0 exists
	I0917 00:50:26.752830 1105542 cache.go:107] acquiring lock: {Name:mkf4cb04c1071ecafdc32f1a85d4e090a7c4807c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:50:26.752860 1105542 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.0" -> "/home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.0" took 124.82µs
	I0917 00:50:26.752867 1105542 cache.go:115] /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I0917 00:50:26.752870 1105542 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.0 -> /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.0 succeeded
	I0917 00:50:26.752850 1105542 cache.go:115] /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.0 exists
	I0917 00:50:26.752876 1105542 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 206.283µs
	I0917 00:50:26.752885 1105542 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I0917 00:50:26.752884 1105542 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.0" -> "/home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.0" took 209.597µs
	I0917 00:50:26.752892 1105542 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.0 -> /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.0 succeeded
	I0917 00:50:26.752943 1105542 cache.go:115] /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I0917 00:50:26.752908 1105542 cache.go:107] acquiring lock: {Name:mkc74a1cd2fbc63086196dff6872225ceed330b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:50:26.752953 1105542 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 170.837µs
	I0917 00:50:26.752932 1105542 cache.go:107] acquiring lock: {Name:mka95aa97be9e772922157c335bf881cd020f83a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:50:26.752963 1105542 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I0917 00:50:26.753088 1105542 cache.go:115] /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I0917 00:50:26.753104 1105542 cache.go:115] /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.0 exists
	I0917 00:50:26.753103 1105542 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 260.35µs
	I0917 00:50:26.753116 1105542 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I0917 00:50:26.753114 1105542 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.0" -> "/home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.0" took 233.147µs
	I0917 00:50:26.753124 1105542 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.0 -> /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.0 succeeded
	I0917 00:50:26.753139 1105542 cache.go:87] Successfully saved all images to host disk.
	I0917 00:50:26.772809 1105542 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:50:26.772828 1105542 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:50:26.772844 1105542 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:50:26.772868 1105542 start.go:360] acquireMachinesLock for no-preload-305343: {Name:mk301cc5652bfe73a264aaf61a48b9167df412f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:50:26.772918 1105542 start.go:364] duration metric: took 33.839µs to acquireMachinesLock for "no-preload-305343"
	I0917 00:50:26.772939 1105542 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:50:26.772949 1105542 fix.go:54] fixHost starting: 
	I0917 00:50:26.773161 1105542 cli_runner.go:164] Run: docker container inspect no-preload-305343 --format={{.State.Status}}
	I0917 00:50:26.790526 1105542 fix.go:112] recreateIfNeeded on no-preload-305343: state=Stopped err=<nil>
	W0917 00:50:26.790551 1105542 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:50:23.500084 1099270 system_pods.go:86] 8 kube-system pods found
	I0917 00:50:23.500124 1099270 system_pods.go:89] "coredns-66bc5c9577-l6kf4" [3471ccde-7a2f-40db-9b89-0b0b1d99d708] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:50:23.500134 1099270 system_pods.go:89] "etcd-embed-certs-656365" [7019686f-c4e9-4229-96b9-b0b736f1ff1f] Running
	I0917 00:50:23.500142 1099270 system_pods.go:89] "kindnet-82pzc" [98599a74-dfca-4aaa-8c5f-f5b5fa514aea] Running
	I0917 00:50:23.500148 1099270 system_pods.go:89] "kube-apiserver-embed-certs-656365" [4474c0bb-d251-4a3f-9617-05519f6f36e1] Running
	I0917 00:50:23.500156 1099270 system_pods.go:89] "kube-controller-manager-embed-certs-656365" [9d8ea73e-c495-4fc0-8e50-d21f3bbd7bf7] Running
	I0917 00:50:23.500163 1099270 system_pods.go:89] "kube-proxy-h2lgd" [ec2eebd5-b4b3-41cf-af3b-efda8464fe22] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 00:50:23.500173 1099270 system_pods.go:89] "kube-scheduler-embed-certs-656365" [74679934-eb97-474d-99a2-28c356aa74b4] Running
	I0917 00:50:23.500181 1099270 system_pods.go:89] "storage-provisioner" [124939ce-5cb9-41fa-b555-2b807af05792] Running
	I0917 00:50:23.500205 1099270 retry.go:31] will retry after 4.483504149s: missing components: kube-dns
	I0917 00:50:28.271397 1094183 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:50:28.306326 1094183 out.go:203] 
	W0917 00:50:28.307357 1094183 out.go:285] X Exiting due to RUNTIME_ENABLE: Failed to start container runtime: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:50:28.301888     533 remote_runtime.go:189] "Version from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	time="2025-09-17T00:50:28Z" level=fatal msg="getting the runtime version: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	
	W0917 00:50:28.307374 1094183 out.go:285] * 
	W0917 00:50:28.309097 1094183 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 00:50:28.310228 1094183 out.go:203] 
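[Annotation, not part of the captured log] The RUNTIME_ENABLE failure above is a CRI API mismatch: the crictl probe asks for the legacy runtime.v1alpha2.RuntimeService, while the runtime answering on the node only serves CRI v1, so the version check fails before Kubernetes is ever started. A minimal diagnosis sketch under that assumption (socket path and crictl config assumed to match the ones written later in this log):

    # Point crictl at the containerd socket (mirrors the tee seen later in this log).
    printf 'runtime-endpoint: unix:///run/containerd/containerd.sock\n' | sudo tee /etc/crictl.yaml
    # A crictl release that speaks CRI v1 should now print the runtime name/version
    # instead of "unknown service runtime.v1alpha2.RuntimeService".
    sudo crictl version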
	I0917 00:50:26.792668 1105542 out.go:252] * Restarting existing docker container for "no-preload-305343" ...
	I0917 00:50:26.792734 1105542 cli_runner.go:164] Run: docker start no-preload-305343
	I0917 00:50:27.022497 1105542 cli_runner.go:164] Run: docker container inspect no-preload-305343 --format={{.State.Status}}
	I0917 00:50:27.039968 1105542 kic.go:430] container "no-preload-305343" state is running.
	I0917 00:50:27.040316 1105542 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-305343
	I0917 00:50:27.058031 1105542 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/no-preload-305343/config.json ...
	I0917 00:50:27.058242 1105542 machine.go:93] provisionDockerMachine start ...
	I0917 00:50:27.058314 1105542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-305343
	I0917 00:50:27.076745 1105542 main.go:141] libmachine: Using SSH client type: native
	I0917 00:50:27.076991 1105542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33894 <nil> <nil>}
	I0917 00:50:27.077008 1105542 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:50:27.077565 1105542 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38760->127.0.0.1:33894: read: connection reset by peer
	I0917 00:50:30.213495 1105542 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-305343
	
	I0917 00:50:30.213525 1105542 ubuntu.go:182] provisioning hostname "no-preload-305343"
	I0917 00:50:30.213589 1105542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-305343
	I0917 00:50:30.233747 1105542 main.go:141] libmachine: Using SSH client type: native
	I0917 00:50:30.234073 1105542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33894 <nil> <nil>}
	I0917 00:50:30.234091 1105542 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-305343 && echo "no-preload-305343" | sudo tee /etc/hostname
	I0917 00:50:30.382760 1105542 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-305343
	
	I0917 00:50:30.382841 1105542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-305343
	I0917 00:50:30.402051 1105542 main.go:141] libmachine: Using SSH client type: native
	I0917 00:50:30.402349 1105542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33894 <nil> <nil>}
	I0917 00:50:30.402383 1105542 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-305343' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-305343/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-305343' | sudo tee -a /etc/hosts; 
				fi
			fi
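[Annotation] The SSH snippet above is idempotent: it either rewrites an existing 127.0.1.1 entry or appends one, so after it runs /etc/hosts on the machine should contain a line equivalent to the sketch below (illustrative, not captured here):

    127.0.1.1 no-preload-305343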
	I0917 00:50:30.537908 1105542 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:50:30.537940 1105542 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-749120/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-749120/.minikube}
	I0917 00:50:30.537984 1105542 ubuntu.go:190] setting up certificates
	I0917 00:50:30.537995 1105542 provision.go:84] configureAuth start
	I0917 00:50:30.538058 1105542 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-305343
	I0917 00:50:30.556184 1105542 provision.go:143] copyHostCerts
	I0917 00:50:30.556260 1105542 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem, removing ...
	I0917 00:50:30.556285 1105542 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem
	I0917 00:50:30.556352 1105542 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem (1078 bytes)
	I0917 00:50:30.556484 1105542 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem, removing ...
	I0917 00:50:30.556494 1105542 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem
	I0917 00:50:30.556525 1105542 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem (1123 bytes)
	I0917 00:50:30.556597 1105542 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem, removing ...
	I0917 00:50:30.556604 1105542 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem
	I0917 00:50:30.556630 1105542 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem (1675 bytes)
	I0917 00:50:30.556699 1105542 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem org=jenkins.no-preload-305343 san=[127.0.0.1 192.168.103.2 localhost minikube no-preload-305343]
	I0917 00:50:30.756241 1105542 provision.go:177] copyRemoteCerts
	I0917 00:50:30.756301 1105542 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:50:30.756332 1105542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-305343
	I0917 00:50:30.775252 1105542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33894 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/no-preload-305343/id_rsa Username:docker}
	I0917 00:50:30.871607 1105542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 00:50:30.897809 1105542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0917 00:50:30.924573 1105542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 00:50:30.953460 1105542 provision.go:87] duration metric: took 415.445888ms to configureAuth
	I0917 00:50:30.953499 1105542 ubuntu.go:206] setting minikube options for container-runtime
	I0917 00:50:30.953714 1105542 config.go:182] Loaded profile config "no-preload-305343": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:50:30.953727 1105542 machine.go:96] duration metric: took 3.895472569s to provisionDockerMachine
	I0917 00:50:30.953736 1105542 start.go:293] postStartSetup for "no-preload-305343" (driver="docker")
	I0917 00:50:30.953749 1105542 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 00:50:30.953811 1105542 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 00:50:30.953864 1105542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-305343
	I0917 00:50:30.972069 1105542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33894 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/no-preload-305343/id_rsa Username:docker}
	I0917 00:50:31.071933 1105542 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 00:50:31.075759 1105542 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 00:50:31.075799 1105542 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 00:50:31.075813 1105542 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 00:50:31.075825 1105542 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 00:50:31.075837 1105542 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-749120/.minikube/addons for local assets ...
	I0917 00:50:31.075902 1105542 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-749120/.minikube/files for local assets ...
	I0917 00:50:31.075982 1105542 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> 7527072.pem in /etc/ssl/certs
	I0917 00:50:31.076070 1105542 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 00:50:31.085212 1105542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem --> /etc/ssl/certs/7527072.pem (1708 bytes)
	I0917 00:50:31.111110 1105542 start.go:296] duration metric: took 157.358327ms for postStartSetup
	I0917 00:50:31.111187 1105542 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:50:31.111241 1105542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-305343
	I0917 00:50:31.129720 1105542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33894 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/no-preload-305343/id_rsa Username:docker}
	I0917 00:50:31.222275 1105542 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:50:31.226586 1105542 fix.go:56] duration metric: took 4.453629943s for fixHost
	I0917 00:50:31.226610 1105542 start.go:83] releasing machines lock for "no-preload-305343", held for 4.453677502s
	I0917 00:50:31.226680 1105542 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-305343
	I0917 00:50:31.245832 1105542 ssh_runner.go:195] Run: cat /version.json
	I0917 00:50:31.245873 1105542 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 00:50:31.245882 1105542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-305343
	I0917 00:50:31.245943 1105542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-305343
	I0917 00:50:31.266309 1105542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33894 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/no-preload-305343/id_rsa Username:docker}
	I0917 00:50:31.266864 1105542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33894 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/no-preload-305343/id_rsa Username:docker}
	I0917 00:50:31.456251 1105542 ssh_runner.go:195] Run: systemctl --version
	I0917 00:50:31.461506 1105542 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 00:50:31.466079 1105542 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0917 00:50:31.486672 1105542 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0917 00:50:31.486747 1105542 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:50:31.495929 1105542 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0917 00:50:31.495956 1105542 start.go:495] detecting cgroup driver to use...
	I0917 00:50:31.495995 1105542 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:50:31.496042 1105542 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 00:50:31.509362 1105542 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 00:50:31.520971 1105542 docker.go:218] disabling cri-docker service (if available) ...
	I0917 00:50:31.521016 1105542 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 00:50:31.533687 1105542 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 00:50:31.545166 1105542 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 00:50:27.988100 1099270 system_pods.go:86] 8 kube-system pods found
	I0917 00:50:27.988133 1099270 system_pods.go:89] "coredns-66bc5c9577-l6kf4" [3471ccde-7a2f-40db-9b89-0b0b1d99d708] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:50:27.988140 1099270 system_pods.go:89] "etcd-embed-certs-656365" [7019686f-c4e9-4229-96b9-b0b736f1ff1f] Running
	I0917 00:50:27.988149 1099270 system_pods.go:89] "kindnet-82pzc" [98599a74-dfca-4aaa-8c5f-f5b5fa514aea] Running
	I0917 00:50:27.988155 1099270 system_pods.go:89] "kube-apiserver-embed-certs-656365" [4474c0bb-d251-4a3f-9617-05519f6f36e1] Running
	I0917 00:50:27.988160 1099270 system_pods.go:89] "kube-controller-manager-embed-certs-656365" [9d8ea73e-c495-4fc0-8e50-d21f3bbd7bf7] Running
	I0917 00:50:27.988165 1099270 system_pods.go:89] "kube-proxy-h2lgd" [ec2eebd5-b4b3-41cf-af3b-efda8464fe22] Running
	I0917 00:50:27.988170 1099270 system_pods.go:89] "kube-scheduler-embed-certs-656365" [74679934-eb97-474d-99a2-28c356aa74b4] Running
	I0917 00:50:27.988174 1099270 system_pods.go:89] "storage-provisioner" [124939ce-5cb9-41fa-b555-2b807af05792] Running
	I0917 00:50:27.988200 1099270 retry.go:31] will retry after 4.946737764s: missing components: kube-dns
	I0917 00:50:31.609866 1105542 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 00:50:31.685656 1105542 docker.go:234] disabling docker service ...
	I0917 00:50:31.685721 1105542 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 00:50:31.700024 1105542 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 00:50:31.711577 1105542 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 00:50:31.779451 1105542 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 00:50:31.848784 1105542 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 00:50:31.862220 1105542 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:50:31.881443 1105542 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0917 00:50:31.891493 1105542 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 00:50:31.901329 1105542 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0917 00:50:31.901379 1105542 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0917 00:50:31.911827 1105542 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:50:31.922603 1105542 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 00:50:31.932809 1105542 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:50:31.942487 1105542 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 00:50:31.952360 1105542 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 00:50:31.962527 1105542 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 00:50:31.972596 1105542 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 00:50:31.983099 1105542 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 00:50:31.991788 1105542 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 00:50:32.000521 1105542 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:50:32.069475 1105542 ssh_runner.go:195] Run: sudo systemctl restart containerd
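[Annotation] The tee/sed commands between 00:50:31.862 and 00:50:32.000 rewrite /etc/crictl.yaml and /etc/containerd/config.toml in place before containerd is restarted. A hedged spot-check of the keys those edits target (config layout assumed to follow the stock kicbase image):

    # Values expected after the edits above:
    #   sandbox_image = "registry.k8s.io/pause:3.10.1"
    #   restrict_oom_score_adj = false
    #   SystemdCgroup = true
    #   conf_dir = "/etc/cni/net.d"
    #   enable_unprivileged_ports = true
    sudo grep -E 'sandbox_image|restrict_oom_score_adj|SystemdCgroup|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
    sudo systemctl is-active containerd   # should report "active" after the restart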
	I0917 00:50:32.171320 1105542 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0917 00:50:32.171376 1105542 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0917 00:50:32.176131 1105542 start.go:563] Will wait 60s for crictl version
	I0917 00:50:32.176181 1105542 ssh_runner.go:195] Run: which crictl
	I0917 00:50:32.179964 1105542 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:50:32.216471 1105542 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0917 00:50:32.216551 1105542 ssh_runner.go:195] Run: containerd --version
	I0917 00:50:32.242200 1105542 ssh_runner.go:195] Run: containerd --version
	I0917 00:50:32.268158 1105542 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0917 00:50:32.269243 1105542 cli_runner.go:164] Run: docker network inspect no-preload-305343 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:50:32.286307 1105542 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I0917 00:50:32.290109 1105542 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:50:32.301499 1105542 kubeadm.go:875] updating cluster {Name:no-preload-305343 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-305343 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 00:50:32.301638 1105542 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0917 00:50:32.301684 1105542 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 00:50:32.339717 1105542 containerd.go:627] all images are preloaded for containerd runtime.
	I0917 00:50:32.339740 1105542 cache_images.go:85] Images are preloaded, skipping loading
	I0917 00:50:32.339749 1105542 kubeadm.go:926] updating node { 192.168.103.2 8443 v1.34.0 containerd true true} ...
	I0917 00:50:32.339869 1105542 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-305343 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:no-preload-305343 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
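[Annotation] The kubelet unit drop-in rendered above is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below (322 bytes). A hedged verification step, not performed in this run, to confirm systemd picked up the node-ip and hostname-override flags:

    sudo systemctl cat kubelet | grep -A2 '^ExecStart='
    sudo systemctl is-enabled kubelet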
	I0917 00:50:32.339944 1105542 ssh_runner.go:195] Run: sudo crictl info
	I0917 00:50:32.376814 1105542 cni.go:84] Creating CNI manager for ""
	I0917 00:50:32.376836 1105542 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0917 00:50:32.376849 1105542 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 00:50:32.376872 1105542 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-305343 NodeName:no-preload-305343 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 00:50:32.376990 1105542 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-305343"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
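[Annotation] The multi-document manifest above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is what gets copied to /var/tmp/minikube/kubeadm.yaml.new below (2232 bytes). An optional offline sanity check, assuming kubeadm is staged next to the kubectl/kubelet binaries seen in this log and that this release supports the "config validate" subcommand:

    sudo /var/lib/minikube/binaries/v1.34.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new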
	
	I0917 00:50:32.377060 1105542 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 00:50:32.387103 1105542 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 00:50:32.387156 1105542 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 00:50:32.397201 1105542 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0917 00:50:32.415962 1105542 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 00:50:32.434721 1105542 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2232 bytes)
	I0917 00:50:32.454136 1105542 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0917 00:50:32.457724 1105542 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:50:32.469156 1105542 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:50:32.536961 1105542 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:50:32.560468 1105542 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/no-preload-305343 for IP: 192.168.103.2
	I0917 00:50:32.560485 1105542 certs.go:194] generating shared ca certs ...
	I0917 00:50:32.560505 1105542 certs.go:226] acquiring lock for ca certs: {Name:mk87d179b4a631193bd9c86db8034ccf19400cde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:50:32.560632 1105542 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key
	I0917 00:50:32.560670 1105542 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key
	I0917 00:50:32.560680 1105542 certs.go:256] generating profile certs ...
	I0917 00:50:32.560755 1105542 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/no-preload-305343/client.key
	I0917 00:50:32.560808 1105542 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/no-preload-305343/apiserver.key.ccbdf892
	I0917 00:50:32.560855 1105542 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/no-preload-305343/proxy-client.key
	I0917 00:50:32.560965 1105542 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem (1338 bytes)
	W0917 00:50:32.561003 1105542 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707_empty.pem, impossibly tiny 0 bytes
	I0917 00:50:32.561013 1105542 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 00:50:32.561039 1105542 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem (1078 bytes)
	I0917 00:50:32.561060 1105542 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem (1123 bytes)
	I0917 00:50:32.561084 1105542 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem (1675 bytes)
	I0917 00:50:32.561121 1105542 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem (1708 bytes)
	I0917 00:50:32.561753 1105542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 00:50:32.587079 1105542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 00:50:32.614113 1105542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 00:50:32.645555 1105542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0917 00:50:32.676563 1105542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/no-preload-305343/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0917 00:50:32.702407 1105542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/no-preload-305343/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 00:50:32.727828 1105542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/no-preload-305343/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 00:50:32.756314 1105542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/no-preload-305343/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0917 00:50:32.784212 1105542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem --> /usr/share/ca-certificates/752707.pem (1338 bytes)
	I0917 00:50:32.810459 1105542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem --> /usr/share/ca-certificates/7527072.pem (1708 bytes)
	I0917 00:50:32.834131 1105542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 00:50:32.857459 1105542 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 00:50:32.874913 1105542 ssh_runner.go:195] Run: openssl version
	I0917 00:50:32.880315 1105542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 00:50:32.890355 1105542 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:50:32.894019 1105542 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:50:32.894073 1105542 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:50:32.901292 1105542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 00:50:32.910006 1105542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/752707.pem && ln -fs /usr/share/ca-certificates/752707.pem /etc/ssl/certs/752707.pem"
	I0917 00:50:32.920695 1105542 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/752707.pem
	I0917 00:50:32.925468 1105542 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 23:54 /usr/share/ca-certificates/752707.pem
	I0917 00:50:32.925518 1105542 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/752707.pem
	I0917 00:50:32.933107 1105542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/752707.pem /etc/ssl/certs/51391683.0"
	I0917 00:50:32.944342 1105542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7527072.pem && ln -fs /usr/share/ca-certificates/7527072.pem /etc/ssl/certs/7527072.pem"
	I0917 00:50:32.954775 1105542 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7527072.pem
	I0917 00:50:32.958957 1105542 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 23:54 /usr/share/ca-certificates/7527072.pem
	I0917 00:50:32.959013 1105542 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7527072.pem
	I0917 00:50:32.966492 1105542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7527072.pem /etc/ssl/certs/3ec20f2e.0"
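[Annotation] The openssl/ln sequence from 00:50:32.880 onward builds the standard OpenSSL hashed-symlink layout: each CA placed under /usr/share/ca-certificates is linked into /etc/ssl/certs, and an alias named after its subject hash (b5213941.0, 51391683.0, 3ec20f2e.0 above) points back at it. An equivalent per-CA sketch:

    # Link the CA into the trust directory, then create the <hash>.0 alias OpenSSL looks up.
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"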
	I0917 00:50:32.975584 1105542 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 00:50:32.979076 1105542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 00:50:32.986429 1105542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 00:50:32.993046 1105542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 00:50:32.999821 1105542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 00:50:33.006600 1105542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 00:50:33.013633 1105542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0917 00:50:33.020609 1105542 kubeadm.go:392] StartCluster: {Name:no-preload-305343 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-305343 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:50:33.020692 1105542 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0917 00:50:33.020742 1105542 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 00:50:33.057726 1105542 cri.go:89] found id: "be727acb2413cb62fed400065b77a2d8563f6d702ad011f84625f93046d14fa8"
	I0917 00:50:33.057751 1105542 cri.go:89] found id: "2b254e4e0a0913ce22ca281722e2f2a1770d019ec5cb32075a2454bb408e22b6"
	I0917 00:50:33.057756 1105542 cri.go:89] found id: "eadc8234c1f716492213f4ceca0728ba4a8c39da932123890849560c7b9720bb"
	I0917 00:50:33.057761 1105542 cri.go:89] found id: "91590efa3cb9a0c2d0bd6417fedf19627c4a6660552820f8106925860be386a9"
	I0917 00:50:33.057766 1105542 cri.go:89] found id: "ad80eddc5416b6ae6865995f75f38026168b4b99c810f8838d86bfec63fa39ae"
	I0917 00:50:33.057771 1105542 cri.go:89] found id: "4a0e86b6bd8d5d5d1cabb8dcdf6cfc8320af52b7e9169b57ed5e4e130d49dff2"
	I0917 00:50:33.057775 1105542 cri.go:89] found id: "c93b4c8d7d7ec1ea6230427e1f23c4d57729b33e12e06f8005808cd41870e3e6"
	I0917 00:50:33.057780 1105542 cri.go:89] found id: "f746107f720e032ae0dac8b9d00658f03da9bc2d049aa1899f0b5d09ce38aecc"
	I0917 00:50:33.057797 1105542 cri.go:89] found id: ""
	I0917 00:50:33.057846 1105542 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W0917 00:50:33.072596 1105542 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T00:50:33Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I0917 00:50:33.072659 1105542 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 00:50:33.083493 1105542 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0917 00:50:33.083567 1105542 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0917 00:50:33.083620 1105542 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0917 00:50:33.097898 1105542 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:50:33.098968 1105542 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-305343" does not appear in /home/jenkins/minikube-integration/21550-749120/kubeconfig
	I0917 00:50:33.099672 1105542 kubeconfig.go:62] /home/jenkins/minikube-integration/21550-749120/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-305343" cluster setting kubeconfig missing "no-preload-305343" context setting]
	I0917 00:50:33.100686 1105542 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/kubeconfig: {Name:mk937123a8fee18625833b0bd778c4556f6787be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:50:33.102853 1105542 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0917 00:50:33.116499 1105542 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.103.2
	I0917 00:50:33.116539 1105542 kubeadm.go:593] duration metric: took 32.96128ms to restartPrimaryControlPlane
	I0917 00:50:33.116552 1105542 kubeadm.go:394] duration metric: took 95.951287ms to StartCluster
	I0917 00:50:33.116574 1105542 settings.go:142] acquiring lock: {Name:mk6c1a5bee23e141aad5180323c16c47ed580ab8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:50:33.116635 1105542 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21550-749120/kubeconfig
	I0917 00:50:33.118404 1105542 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/kubeconfig: {Name:mk937123a8fee18625833b0bd778c4556f6787be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:50:33.118866 1105542 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0917 00:50:33.119205 1105542 config.go:182] Loaded profile config "no-preload-305343": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:50:33.119256 1105542 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 00:50:33.119338 1105542 addons.go:69] Setting storage-provisioner=true in profile "no-preload-305343"
	I0917 00:50:33.119362 1105542 addons.go:238] Setting addon storage-provisioner=true in "no-preload-305343"
	W0917 00:50:33.119371 1105542 addons.go:247] addon storage-provisioner should already be in state true
	I0917 00:50:33.119396 1105542 host.go:66] Checking if "no-preload-305343" exists ...
	I0917 00:50:33.119755 1105542 addons.go:69] Setting default-storageclass=true in profile "no-preload-305343"
	I0917 00:50:33.119773 1105542 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-305343"
	I0917 00:50:33.120107 1105542 cli_runner.go:164] Run: docker container inspect no-preload-305343 --format={{.State.Status}}
	I0917 00:50:33.120165 1105542 addons.go:69] Setting metrics-server=true in profile "no-preload-305343"
	I0917 00:50:33.120185 1105542 addons.go:238] Setting addon metrics-server=true in "no-preload-305343"
	W0917 00:50:33.120193 1105542 addons.go:247] addon metrics-server should already be in state true
	I0917 00:50:33.120222 1105542 host.go:66] Checking if "no-preload-305343" exists ...
	I0917 00:50:33.121014 1105542 cli_runner.go:164] Run: docker container inspect no-preload-305343 --format={{.State.Status}}
	I0917 00:50:33.121115 1105542 addons.go:69] Setting dashboard=true in profile "no-preload-305343"
	I0917 00:50:33.121436 1105542 addons.go:238] Setting addon dashboard=true in "no-preload-305343"
	W0917 00:50:33.121452 1105542 addons.go:247] addon dashboard should already be in state true
	I0917 00:50:33.121490 1105542 host.go:66] Checking if "no-preload-305343" exists ...
	I0917 00:50:33.121902 1105542 cli_runner.go:164] Run: docker container inspect no-preload-305343 --format={{.State.Status}}
	I0917 00:50:33.122278 1105542 out.go:179] * Verifying Kubernetes components...
	I0917 00:50:33.122357 1105542 cli_runner.go:164] Run: docker container inspect no-preload-305343 --format={{.State.Status}}
	I0917 00:50:33.123546 1105542 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:50:33.167058 1105542 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 00:50:33.168115 1105542 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0917 00:50:33.171091 1105542 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 00:50:33.171119 1105542 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0917 00:50:33.171183 1105542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-305343
	I0917 00:50:33.173949 1105542 addons.go:238] Setting addon default-storageclass=true in "no-preload-305343"
	W0917 00:50:33.174269 1105542 addons.go:247] addon default-storageclass should already be in state true
	I0917 00:50:33.174232 1105542 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0917 00:50:33.174392 1105542 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0917 00:50:33.174612 1105542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-305343
	I0917 00:50:33.177380 1105542 host.go:66] Checking if "no-preload-305343" exists ...
	I0917 00:50:33.177760 1105542 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0917 00:50:33.177899 1105542 cli_runner.go:164] Run: docker container inspect no-preload-305343 --format={{.State.Status}}
	I0917 00:50:33.181837 1105542 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0917 00:50:33.182786 1105542 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0917 00:50:33.182820 1105542 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0917 00:50:33.182875 1105542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-305343
	I0917 00:50:33.201609 1105542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33894 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/no-preload-305343/id_rsa Username:docker}
	I0917 00:50:33.201953 1105542 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0917 00:50:33.203128 1105542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33894 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/no-preload-305343/id_rsa Username:docker}
	I0917 00:50:33.203534 1105542 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0917 00:50:33.203626 1105542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-305343
	I0917 00:50:33.208310 1105542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33894 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/no-preload-305343/id_rsa Username:docker}
	I0917 00:50:33.225830 1105542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33894 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/no-preload-305343/id_rsa Username:docker}
	I0917 00:50:33.262072 1105542 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:50:33.292201 1105542 node_ready.go:35] waiting up to 6m0s for node "no-preload-305343" to be "Ready" ...
	I0917 00:50:33.323048 1105542 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0917 00:50:33.323076 1105542 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0917 00:50:33.329152 1105542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 00:50:33.336925 1105542 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0917 00:50:33.336952 1105542 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0917 00:50:33.356568 1105542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0917 00:50:33.359137 1105542 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0917 00:50:33.359190 1105542 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0917 00:50:33.364438 1105542 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0917 00:50:33.364465 1105542 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0917 00:50:33.391853 1105542 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 00:50:33.391881 1105542 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0917 00:50:33.395911 1105542 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0917 00:50:33.395935 1105542 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0917 00:50:33.424313 1105542 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0917 00:50:33.424344 1105542 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0917 00:50:33.424732 1105542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0917 00:50:33.427733 1105542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0917 00:50:33.427913 1105542 retry.go:31] will retry after 328.317114ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0917 00:50:33.449613 1105542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0917 00:50:33.449658 1105542 retry.go:31] will retry after 289.134032ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0917 00:50:33.461946 1105542 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0917 00:50:33.461976 1105542 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0917 00:50:33.494149 1105542 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0917 00:50:33.494179 1105542 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	W0917 00:50:33.519674 1105542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0917 00:50:33.519720 1105542 retry.go:31] will retry after 266.352131ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0917 00:50:33.522886 1105542 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0917 00:50:33.522948 1105542 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0917 00:50:33.554013 1105542 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0917 00:50:33.554049 1105542 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0917 00:50:33.585811 1105542 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0917 00:50:33.585864 1105542 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0917 00:50:33.615550 1105542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0917 00:50:33.739006 1105542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0917 00:50:33.756455 1105542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 00:50:33.787125 1105542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 00:50:35.018063 1105542 node_ready.go:49] node "no-preload-305343" is "Ready"
	I0917 00:50:35.018105 1105542 node_ready.go:38] duration metric: took 1.725867509s for node "no-preload-305343" to be "Ready" ...
	I0917 00:50:35.018134 1105542 api_server.go:52] waiting for apiserver process to appear ...
	I0917 00:50:35.018194 1105542 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:50:35.485441 1105542 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.869819068s)
	I0917 00:50:35.485470 1105542 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (1.746425045s)
	I0917 00:50:35.486676 1105542 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-305343 addons enable metrics-server
	
	I0917 00:50:35.585159 1105542 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.828669148s)
	I0917 00:50:35.596571 1105542 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.809405331s)
	I0917 00:50:35.596607 1105542 addons.go:479] Verifying addon metrics-server=true in "no-preload-305343"
	I0917 00:50:35.596649 1105542 api_server.go:72] duration metric: took 2.477742727s to wait for apiserver process to appear ...
	I0917 00:50:35.596672 1105542 api_server.go:88] waiting for apiserver healthz status ...
	I0917 00:50:35.596692 1105542 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0917 00:50:35.598000 1105542 out.go:179] * Enabled addons: dashboard, default-storageclass, storage-provisioner, metrics-server
	I0917 00:50:35.598959 1105542 addons.go:514] duration metric: took 2.479706242s for enable addons: enabled=[dashboard default-storageclass storage-provisioner metrics-server]
	I0917 00:50:35.600549 1105542 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 00:50:35.600575 1105542 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 00:50:36.096781 1105542 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0917 00:50:36.102339 1105542 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 00:50:36.102365 1105542 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 00:50:36.596805 1105542 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0917 00:50:32.939849 1099270 system_pods.go:86] 8 kube-system pods found
	I0917 00:50:32.939879 1099270 system_pods.go:89] "coredns-66bc5c9577-l6kf4" [3471ccde-7a2f-40db-9b89-0b0b1d99d708] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:50:32.939885 1099270 system_pods.go:89] "etcd-embed-certs-656365" [7019686f-c4e9-4229-96b9-b0b736f1ff1f] Running
	I0917 00:50:32.939891 1099270 system_pods.go:89] "kindnet-82pzc" [98599a74-dfca-4aaa-8c5f-f5b5fa514aea] Running
	I0917 00:50:32.939896 1099270 system_pods.go:89] "kube-apiserver-embed-certs-656365" [4474c0bb-d251-4a3f-9617-05519f6f36e1] Running
	I0917 00:50:32.939900 1099270 system_pods.go:89] "kube-controller-manager-embed-certs-656365" [9d8ea73e-c495-4fc0-8e50-d21f3bbd7bf7] Running
	I0917 00:50:32.939904 1099270 system_pods.go:89] "kube-proxy-h2lgd" [ec2eebd5-b4b3-41cf-af3b-efda8464fe22] Running
	I0917 00:50:32.939907 1099270 system_pods.go:89] "kube-scheduler-embed-certs-656365" [74679934-eb97-474d-99a2-28c356aa74b4] Running
	I0917 00:50:32.939911 1099270 system_pods.go:89] "storage-provisioner" [124939ce-5cb9-41fa-b555-2b807af05792] Running
	I0917 00:50:32.939925 1099270 retry.go:31] will retry after 7.48327625s: missing components: kube-dns
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:50:37.753214    1635 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	time="2025-09-17T00:50:37Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> containerd <==
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.817851916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.817861430Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.817871180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.817881996Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.817897602Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.817915140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.817935663Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818007576Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818024724Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818033043Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818056520Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818070483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818081010Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818090259Z" level=info msg="NRI interface is disabled by configuration."
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818098205Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818326501Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:true IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath:/etc/containerd/certs.d Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress: StreamServerPort:10010 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.9 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:true EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818370040Z" level=info msg="Connect containerd service"
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818405590Z" level=info msg="using legacy CRI server"
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818449224Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818553850Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818992386Z" level=warning msg="failed to load plugin io.containerd.grpc.v1.cri" error="failed to create CRI service: failed to create cni conf monitor for default: failed to create fsnotify watcher: too many open files"
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.819183751Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.819229690Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.819274263Z" level=info msg="containerd successfully booted in 0.026038s"
	Sep 17 00:49:32 old-k8s-version-099552 systemd[1]: Started containerd container runtime.
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.397971] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev bridge
	[  +1.103886] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev bridge
	[  +0.397468] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev bridge
	[  +0.988018] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev bridge
	[  +0.115808] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev bridge
	[  +0.396948] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev bridge
	[  +1.104485] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev bridge
	[  +0.396293] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev bridge
	[  +1.105124] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev bridge
	[  +0.396148] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev bridge
	[  +1.500649] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev bridge
	[  +0.569526] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 5f ef 1c 19 8f 08 06
	[ +14.523051] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 56 52 1e b8 75 d2 08 06
	[  +0.000432] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff ba 5e 90 6a a2 f3 08 06
	[  +7.560975] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 2a d4 fe 64 89 0f 08 06
	[  +0.000660] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d6 50 38 7f fb 72 08 06
	[Sep17 00:48] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ea ab 9c df 51 6c 08 06
	[  +0.000561] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 92 5f ef 1c 19 8f 08 06
	
	
	==> kernel <==
	 00:50:37 up  3:32,  0 users,  load average: 2.77, 2.87, 2.17
	Linux old-k8s-version-099552 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kubelet <==
	-- No entries --
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0917 00:50:37.414683 1110589 logs.go:279] Failed to list containers for "kube-apiserver": crictl list: sudo crictl ps -a --quiet --name=kube-apiserver: Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:50:37.412175    1524 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	time="2025-09-17T00:50:37Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0917 00:50:37.447936 1110589 logs.go:279] Failed to list containers for "etcd": crictl list: sudo crictl ps -a --quiet --name=etcd: Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:50:37.445384    1536 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	time="2025-09-17T00:50:37Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0917 00:50:37.481822 1110589 logs.go:279] Failed to list containers for "coredns": crictl list: sudo crictl ps -a --quiet --name=coredns: Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:50:37.479040    1548 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	time="2025-09-17T00:50:37Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0917 00:50:37.515857 1110589 logs.go:279] Failed to list containers for "kube-scheduler": crictl list: sudo crictl ps -a --quiet --name=kube-scheduler: Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:50:37.513079    1560 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	time="2025-09-17T00:50:37Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0917 00:50:37.549921 1110589 logs.go:279] Failed to list containers for "kube-proxy": crictl list: sudo crictl ps -a --quiet --name=kube-proxy: Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:50:37.547178    1572 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	time="2025-09-17T00:50:37Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0917 00:50:37.586517 1110589 logs.go:279] Failed to list containers for "kube-controller-manager": crictl list: sudo crictl ps -a --quiet --name=kube-controller-manager: Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:50:37.583425    1584 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	time="2025-09-17T00:50:37Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0917 00:50:37.629497 1110589 logs.go:279] Failed to list containers for "kindnet": crictl list: sudo crictl ps -a --quiet --name=kindnet: Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:50:37.626320    1596 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	time="2025-09-17T00:50:37Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0917 00:50:37.668828 1110589 logs.go:279] Failed to list containers for "kubernetes-dashboard": crictl list: sudo crictl ps -a --quiet --name=kubernetes-dashboard: Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:50:37.665838    1608 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	time="2025-09-17T00:50:37Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0917 00:50:37.711207 1110589 logs.go:279] Failed to list containers for "storage-provisioner": crictl list: sudo crictl ps -a --quiet --name=storage-provisioner: Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:50:37.707309    1620 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	time="2025-09-17T00:50:37Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"

                                                
                                                
** /stderr **
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-099552 -n old-k8s-version-099552
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-099552 -n old-k8s-version-099552: exit status 6 (305.690449ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0917 00:50:38.237579 1110870 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-099552" does not appear in /home/jenkins/minikube-integration/21550-749120/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "old-k8s-version-099552" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-099552
helpers_test.go:243: (dbg) docker inspect old-k8s-version-099552:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "dc5a2344012058adc433d4d94bb8f0cfb2d2d9a3fcc4c579e400734d708bf606",
	        "Created": "2025-09-17T00:47:14.452618877Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1094443,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-17T00:49:26.538560853Z",
	            "FinishedAt": "2025-09-17T00:49:25.518757767Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/dc5a2344012058adc433d4d94bb8f0cfb2d2d9a3fcc4c579e400734d708bf606/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dc5a2344012058adc433d4d94bb8f0cfb2d2d9a3fcc4c579e400734d708bf606/hostname",
	        "HostsPath": "/var/lib/docker/containers/dc5a2344012058adc433d4d94bb8f0cfb2d2d9a3fcc4c579e400734d708bf606/hosts",
	        "LogPath": "/var/lib/docker/containers/dc5a2344012058adc433d4d94bb8f0cfb2d2d9a3fcc4c579e400734d708bf606/dc5a2344012058adc433d4d94bb8f0cfb2d2d9a3fcc4c579e400734d708bf606-json.log",
	        "Name": "/old-k8s-version-099552",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-099552:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-099552",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dc5a2344012058adc433d4d94bb8f0cfb2d2d9a3fcc4c579e400734d708bf606",
	                "LowerDir": "/var/lib/docker/overlay2/5a917f22cc5c8d72ade6fe744606e8b48681ead29c7cc52e99c757cb3866734e-init/diff:/var/lib/docker/overlay2/949a3fbecd0c2c005aa419b7ddc214e7bf4333225d7b227e8b0d0ea188b945ec/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5a917f22cc5c8d72ade6fe744606e8b48681ead29c7cc52e99c757cb3866734e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5a917f22cc5c8d72ade6fe744606e8b48681ead29c7cc52e99c757cb3866734e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5a917f22cc5c8d72ade6fe744606e8b48681ead29c7cc52e99c757cb3866734e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-099552",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-099552/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-099552",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-099552",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-099552",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2ddde64c24f24e3766a57841a672431afff6e67b8b55455f7a18ce1a12566fcb",
	            "SandboxKey": "/var/run/docker/netns/2ddde64c24f2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33884"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33885"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33888"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33886"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33887"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-099552": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ea:0f:62:a1:13:1a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c084050a20a9e46a211e9f023f9558fec9400a691d73a4266e29ff60000fdc12",
	                    "EndpointID": "d69770401db68668497e8a9ddef6c93a77673f95c78cb11a18c567c6244c9d3d",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-099552",
	                        "dc5a23440120"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-099552 -n old-k8s-version-099552
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-099552 -n old-k8s-version-099552: exit status 6 (279.321772ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0917 00:50:38.535851 1111028 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-099552" does not appear in /home/jenkins/minikube-integration/21550-749120/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-099552 logs -n 25
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────
─────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────
─────────────┤
	│ addons  │ enable metrics-server -p newest-cni-895748 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ newest-cni-895748            │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-011954 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ default-k8s-diff-port-011954 │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ start   │ -p default-k8s-diff-port-011954 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0                                                                      │ default-k8s-diff-port-011954 │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:50 UTC │
	│ stop    │ -p newest-cni-895748 --alsologtostderr -v=3                                                                                                                                                                                                         │ newest-cni-895748            │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ addons  │ enable dashboard -p newest-cni-895748 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ newest-cni-895748            │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ start   │ -p newest-cni-895748 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0 │ newest-cni-895748            │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-099552 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-099552       │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ start   │ -p old-k8s-version-099552 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-099552       │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │                     │
	│ image   │ newest-cni-895748 image list --format=json                                                                                                                                                                                                          │ newest-cni-895748            │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ pause   │ -p newest-cni-895748 --alsologtostderr -v=1                                                                                                                                                                                                         │ newest-cni-895748            │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ unpause │ -p newest-cni-895748 --alsologtostderr -v=1                                                                                                                                                                                                         │ newest-cni-895748            │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ delete  │ -p newest-cni-895748                                                                                                                                                                                                                                │ newest-cni-895748            │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ delete  │ -p newest-cni-895748                                                                                                                                                                                                                                │ newest-cni-895748            │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ delete  │ -p disable-driver-mounts-908870                                                                                                                                                                                                                     │ disable-driver-mounts-908870 │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ start   │ -p embed-certs-656365 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0                                                                                        │ embed-certs-656365           │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-305343 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ no-preload-305343            │ jenkins │ v1.37.0 │ 17 Sep 25 00:50 UTC │ 17 Sep 25 00:50 UTC │
	│ stop    │ -p no-preload-305343 --alsologtostderr -v=3                                                                                                                                                                                                         │ no-preload-305343            │ jenkins │ v1.37.0 │ 17 Sep 25 00:50 UTC │ 17 Sep 25 00:50 UTC │
	│ image   │ default-k8s-diff-port-011954 image list --format=json                                                                                                                                                                                               │ default-k8s-diff-port-011954 │ jenkins │ v1.37.0 │ 17 Sep 25 00:50 UTC │ 17 Sep 25 00:50 UTC │
	│ pause   │ -p default-k8s-diff-port-011954 --alsologtostderr -v=1                                                                                                                                                                                              │ default-k8s-diff-port-011954 │ jenkins │ v1.37.0 │ 17 Sep 25 00:50 UTC │ 17 Sep 25 00:50 UTC │
	│ unpause │ -p default-k8s-diff-port-011954 --alsologtostderr -v=1                                                                                                                                                                                              │ default-k8s-diff-port-011954 │ jenkins │ v1.37.0 │ 17 Sep 25 00:50 UTC │ 17 Sep 25 00:50 UTC │
	│ delete  │ -p default-k8s-diff-port-011954                                                                                                                                                                                                                     │ default-k8s-diff-port-011954 │ jenkins │ v1.37.0 │ 17 Sep 25 00:50 UTC │ 17 Sep 25 00:50 UTC │
	│ addons  │ enable dashboard -p no-preload-305343 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ no-preload-305343            │ jenkins │ v1.37.0 │ 17 Sep 25 00:50 UTC │ 17 Sep 25 00:50 UTC │
	│ start   │ -p no-preload-305343 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0                                                                                       │ no-preload-305343            │ jenkins │ v1.37.0 │ 17 Sep 25 00:50 UTC │                     │
	│ image   │ old-k8s-version-099552 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-099552       │ jenkins │ v1.37.0 │ 17 Sep 25 00:50 UTC │ 17 Sep 25 00:50 UTC │
	│ pause   │ -p old-k8s-version-099552 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-099552       │ jenkins │ v1.37.0 │ 17 Sep 25 00:50 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────
─────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/17 00:50:26
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 00:50:26.600102 1105542 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:50:26.600372 1105542 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:50:26.600383 1105542 out.go:374] Setting ErrFile to fd 2...
	I0917 00:50:26.600387 1105542 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:50:26.600688 1105542 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-749120/.minikube/bin
	I0917 00:50:26.601270 1105542 out.go:368] Setting JSON to false
	I0917 00:50:26.602722 1105542 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":12769,"bootTime":1758057458,"procs":266,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 00:50:26.602857 1105542 start.go:140] virtualization: kvm guest
	I0917 00:50:26.604653 1105542 out.go:179] * [no-preload-305343] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0917 00:50:26.605852 1105542 out.go:179]   - MINIKUBE_LOCATION=21550
	I0917 00:50:26.605872 1105542 notify.go:220] Checking for updates...
	I0917 00:50:26.607972 1105542 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 00:50:26.609198 1105542 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-749120/kubeconfig
	I0917 00:50:26.610268 1105542 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-749120/.minikube
	I0917 00:50:26.611322 1105542 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 00:50:26.612361 1105542 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 00:50:26.613681 1105542 config.go:182] Loaded profile config "no-preload-305343": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:50:26.614196 1105542 driver.go:421] Setting default libvirt URI to qemu:///system
	I0917 00:50:26.637278 1105542 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0917 00:50:26.637376 1105542 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:50:26.690682 1105542 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:65 SystemTime:2025-09-17 00:50:26.681595287 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:50:26.690839 1105542 docker.go:318] overlay module found
	I0917 00:50:26.692660 1105542 out.go:179] * Using the docker driver based on existing profile
	I0917 00:50:26.693532 1105542 start.go:304] selected driver: docker
	I0917 00:50:26.693548 1105542 start.go:918] validating driver "docker" against &{Name:no-preload-305343 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-305343 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:50:26.693646 1105542 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 00:50:26.694360 1105542 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:50:26.747230 1105542 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:65 SystemTime:2025-09-17 00:50:26.73681015 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:50:26.747565 1105542 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:50:26.747603 1105542 cni.go:84] Creating CNI manager for ""
	I0917 00:50:26.747674 1105542 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0917 00:50:26.747730 1105542 start.go:348] cluster config:
	{Name:no-preload-305343 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-305343 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:50:26.749227 1105542 out.go:179] * Starting "no-preload-305343" primary control-plane node in "no-preload-305343" cluster
	I0917 00:50:26.750402 1105542 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0917 00:50:26.751455 1105542 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:50:26.752350 1105542 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0917 00:50:26.752446 1105542 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:50:26.752497 1105542 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/no-preload-305343/config.json ...
	I0917 00:50:26.752648 1105542 cache.go:107] acquiring lock: {Name:mk4909779fb0f5743ddfc059d2d0162861e84f07 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:50:26.752675 1105542 cache.go:107] acquiring lock: {Name:mk6df29775b0b58d1ac8dea5ffe905dd7aa0e789 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:50:26.752655 1105542 cache.go:107] acquiring lock: {Name:mkddf7ba64ef1815649c8a0d31e1ab341ed655cd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:50:26.752697 1105542 cache.go:107] acquiring lock: {Name:mk97f69e3cbe6c234e4de1197be2229ef06ba13f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:50:26.752737 1105542 cache.go:107] acquiring lock: {Name:mkaa0f91c4e98db3393f92864e13e9189082e595 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:50:26.752794 1105542 cache.go:115] /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.0 exists
	I0917 00:50:26.752805 1105542 cache.go:115] /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0917 00:50:26.752810 1105542 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.0" -> "/home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.0" took 171.127µs
	I0917 00:50:26.752837 1105542 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.0 -> /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.0 succeeded
	I0917 00:50:26.752817 1105542 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 180.429µs
	I0917 00:50:26.752848 1105542 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0917 00:50:26.752844 1105542 cache.go:115] /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.0 exists
	I0917 00:50:26.752830 1105542 cache.go:107] acquiring lock: {Name:mkf4cb04c1071ecafdc32f1a85d4e090a7c4807c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:50:26.752860 1105542 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.0" -> "/home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.0" took 124.82µs
	I0917 00:50:26.752867 1105542 cache.go:115] /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I0917 00:50:26.752870 1105542 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.0 -> /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.0 succeeded
	I0917 00:50:26.752850 1105542 cache.go:115] /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.0 exists
	I0917 00:50:26.752876 1105542 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 206.283µs
	I0917 00:50:26.752885 1105542 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I0917 00:50:26.752884 1105542 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.0" -> "/home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.0" took 209.597µs
	I0917 00:50:26.752892 1105542 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.0 -> /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.0 succeeded
	I0917 00:50:26.752943 1105542 cache.go:115] /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I0917 00:50:26.752908 1105542 cache.go:107] acquiring lock: {Name:mkc74a1cd2fbc63086196dff6872225ceed330b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:50:26.752953 1105542 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 170.837µs
	I0917 00:50:26.752932 1105542 cache.go:107] acquiring lock: {Name:mka95aa97be9e772922157c335bf881cd020f83a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:50:26.752963 1105542 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I0917 00:50:26.753088 1105542 cache.go:115] /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I0917 00:50:26.753104 1105542 cache.go:115] /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.0 exists
	I0917 00:50:26.753103 1105542 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 260.35µs
	I0917 00:50:26.753116 1105542 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I0917 00:50:26.753114 1105542 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.0" -> "/home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.0" took 233.147µs
	I0917 00:50:26.753124 1105542 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.0 -> /home/jenkins/minikube-integration/21550-749120/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.0 succeeded
	I0917 00:50:26.753139 1105542 cache.go:87] Successfully saved all images to host disk.
	I0917 00:50:26.772809 1105542 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:50:26.772828 1105542 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:50:26.772844 1105542 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:50:26.772868 1105542 start.go:360] acquireMachinesLock for no-preload-305343: {Name:mk301cc5652bfe73a264aaf61a48b9167df412f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:50:26.772918 1105542 start.go:364] duration metric: took 33.839µs to acquireMachinesLock for "no-preload-305343"
	I0917 00:50:26.772939 1105542 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:50:26.772949 1105542 fix.go:54] fixHost starting: 
	I0917 00:50:26.773161 1105542 cli_runner.go:164] Run: docker container inspect no-preload-305343 --format={{.State.Status}}
	I0917 00:50:26.790526 1105542 fix.go:112] recreateIfNeeded on no-preload-305343: state=Stopped err=<nil>
	W0917 00:50:26.790551 1105542 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:50:23.500084 1099270 system_pods.go:86] 8 kube-system pods found
	I0917 00:50:23.500124 1099270 system_pods.go:89] "coredns-66bc5c9577-l6kf4" [3471ccde-7a2f-40db-9b89-0b0b1d99d708] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:50:23.500134 1099270 system_pods.go:89] "etcd-embed-certs-656365" [7019686f-c4e9-4229-96b9-b0b736f1ff1f] Running
	I0917 00:50:23.500142 1099270 system_pods.go:89] "kindnet-82pzc" [98599a74-dfca-4aaa-8c5f-f5b5fa514aea] Running
	I0917 00:50:23.500148 1099270 system_pods.go:89] "kube-apiserver-embed-certs-656365" [4474c0bb-d251-4a3f-9617-05519f6f36e1] Running
	I0917 00:50:23.500156 1099270 system_pods.go:89] "kube-controller-manager-embed-certs-656365" [9d8ea73e-c495-4fc0-8e50-d21f3bbd7bf7] Running
	I0917 00:50:23.500163 1099270 system_pods.go:89] "kube-proxy-h2lgd" [ec2eebd5-b4b3-41cf-af3b-efda8464fe22] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 00:50:23.500173 1099270 system_pods.go:89] "kube-scheduler-embed-certs-656365" [74679934-eb97-474d-99a2-28c356aa74b4] Running
	I0917 00:50:23.500181 1099270 system_pods.go:89] "storage-provisioner" [124939ce-5cb9-41fa-b555-2b807af05792] Running
	I0917 00:50:23.500205 1099270 retry.go:31] will retry after 4.483504149s: missing components: kube-dns
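
The system_pods wait above repeatedly lists kube-system pods and retries while a required component (here kube-dns/coredns) is not yet ready. The following is a minimal client-go sketch of that kind of readiness check; it is illustrative only (not minikube's wait code) and assumes a kubeconfig at the default location.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from ~/.kube/config (assumption: default kubeconfig location).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// List every pod in kube-system and report phase plus Ready condition,
	// the same information the log lines above summarize per pod.
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		fmt.Printf("%-55s phase=%s ready=%v\n", p.Name, p.Status.Phase, ready)
	}
}
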
	I0917 00:50:28.271397 1094183 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:50:28.306326 1094183 out.go:203] 
	W0917 00:50:28.307357 1094183 out.go:285] X Exiting due to RUNTIME_ENABLE: Failed to start container runtime: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:50:28.301888     533 remote_runtime.go:189] "Version from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	time="2025-09-17T00:50:28Z" level=fatal msg="getting the runtime version: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	
	W0917 00:50:28.307374 1094183 out.go:285] * 
	W0917 00:50:28.309097 1094183 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 00:50:28.310228 1094183 out.go:203] 
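
The RUNTIME_ENABLE failure above is produced by a crictl client that still requests the removed runtime.v1alpha2 CRI API, while containerd 1.7.x only serves runtime.v1, hence the "unknown service runtime.v1alpha2.RuntimeService" error. A hedged Go sketch of the equivalent v1 Version probe (roughly what a current `crictl version` does) is below; the socket path is the one from this log, and the snippet is an illustration, not crictl source.

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimev1 "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial the containerd CRI socket over the unix scheme (no TLS on a local socket).
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Ask the v1 RuntimeService for its version; a client that still targets
	// the v1alpha2 service fails here with codes.Unimplemented, as in the log.
	resp, err := runtimev1.NewRuntimeServiceClient(conn).Version(ctx, &runtimev1.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Println(resp.RuntimeName, resp.RuntimeVersion, resp.RuntimeApiVersion)
}
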
	I0917 00:50:26.792668 1105542 out.go:252] * Restarting existing docker container for "no-preload-305343" ...
	I0917 00:50:26.792734 1105542 cli_runner.go:164] Run: docker start no-preload-305343
	I0917 00:50:27.022497 1105542 cli_runner.go:164] Run: docker container inspect no-preload-305343 --format={{.State.Status}}
	I0917 00:50:27.039968 1105542 kic.go:430] container "no-preload-305343" state is running.
	I0917 00:50:27.040316 1105542 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-305343
	I0917 00:50:27.058031 1105542 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/no-preload-305343/config.json ...
	I0917 00:50:27.058242 1105542 machine.go:93] provisionDockerMachine start ...
	I0917 00:50:27.058314 1105542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-305343
	I0917 00:50:27.076745 1105542 main.go:141] libmachine: Using SSH client type: native
	I0917 00:50:27.076991 1105542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33894 <nil> <nil>}
	I0917 00:50:27.077008 1105542 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:50:27.077565 1105542 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38760->127.0.0.1:33894: read: connection reset by peer
	I0917 00:50:30.213495 1105542 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-305343
	
	I0917 00:50:30.213525 1105542 ubuntu.go:182] provisioning hostname "no-preload-305343"
	I0917 00:50:30.213589 1105542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-305343
	I0917 00:50:30.233747 1105542 main.go:141] libmachine: Using SSH client type: native
	I0917 00:50:30.234073 1105542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33894 <nil> <nil>}
	I0917 00:50:30.234091 1105542 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-305343 && echo "no-preload-305343" | sudo tee /etc/hostname
	I0917 00:50:30.382760 1105542 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-305343
	
	I0917 00:50:30.382841 1105542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-305343
	I0917 00:50:30.402051 1105542 main.go:141] libmachine: Using SSH client type: native
	I0917 00:50:30.402349 1105542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33894 <nil> <nil>}
	I0917 00:50:30.402383 1105542 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-305343' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-305343/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-305343' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 00:50:30.537908 1105542 main.go:141] libmachine: SSH cmd err, output: <nil>: 
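
The hostname and /etc/hosts provisioning above is a sequence of commands the libmachine native client runs over SSH against the restarted container. Below is a minimal golang.org/x/crypto/ssh sketch of running one such command; the port, user and key path are taken from this log, and the snippet is illustrative, not libmachine code.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Load the machine's private key (path taken from the log above).
	key, err := os.ReadFile("/home/jenkins/minikube-integration/21550-749120/.minikube/machines/no-preload-305343/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	// Connect to the forwarded SSH port of the container (127.0.0.1:33894 in this run).
	client, err := ssh.Dial("tcp", "127.0.0.1:33894", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway CI VM, not for production
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	// Run the same kind of hostname-setting command seen in the log.
	out, err := sess.CombinedOutput(`sudo hostname no-preload-305343 && echo "no-preload-305343" | sudo tee /etc/hostname`)
	fmt.Printf("output: %s err: %v\n", out, err)
}
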
	I0917 00:50:30.537940 1105542 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-749120/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-749120/.minikube}
	I0917 00:50:30.537984 1105542 ubuntu.go:190] setting up certificates
	I0917 00:50:30.537995 1105542 provision.go:84] configureAuth start
	I0917 00:50:30.538058 1105542 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-305343
	I0917 00:50:30.556184 1105542 provision.go:143] copyHostCerts
	I0917 00:50:30.556260 1105542 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem, removing ...
	I0917 00:50:30.556285 1105542 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem
	I0917 00:50:30.556352 1105542 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/ca.pem (1078 bytes)
	I0917 00:50:30.556484 1105542 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem, removing ...
	I0917 00:50:30.556494 1105542 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem
	I0917 00:50:30.556525 1105542 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/cert.pem (1123 bytes)
	I0917 00:50:30.556597 1105542 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem, removing ...
	I0917 00:50:30.556604 1105542 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem
	I0917 00:50:30.556630 1105542 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-749120/.minikube/key.pem (1675 bytes)
	I0917 00:50:30.556699 1105542 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem org=jenkins.no-preload-305343 san=[127.0.0.1 192.168.103.2 localhost minikube no-preload-305343]
	I0917 00:50:30.756241 1105542 provision.go:177] copyRemoteCerts
	I0917 00:50:30.756301 1105542 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:50:30.756332 1105542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-305343
	I0917 00:50:30.775252 1105542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33894 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/no-preload-305343/id_rsa Username:docker}
	I0917 00:50:30.871607 1105542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 00:50:30.897809 1105542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0917 00:50:30.924573 1105542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 00:50:30.953460 1105542 provision.go:87] duration metric: took 415.445888ms to configureAuth
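
configureAuth above regenerates the docker-machine server certificate, signed by the local CA and carrying the listed SANs (127.0.0.1, 192.168.103.2, localhost, minikube, no-preload-305343). A small crypto/x509 sketch of producing such a CA-signed server certificate is below; it is an assumed illustration of the idea, not minikube's provision code, and error handling is dropped for brevity.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Self-signed CA (the real CA already exists under .minikube/certs).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs from the log, signed by the CA above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-305343"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.103.2")},
		DNSNames:     []string{"localhost", "minikube", "no-preload-305343"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
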
	I0917 00:50:30.953499 1105542 ubuntu.go:206] setting minikube options for container-runtime
	I0917 00:50:30.953714 1105542 config.go:182] Loaded profile config "no-preload-305343": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:50:30.953727 1105542 machine.go:96] duration metric: took 3.895472569s to provisionDockerMachine
	I0917 00:50:30.953736 1105542 start.go:293] postStartSetup for "no-preload-305343" (driver="docker")
	I0917 00:50:30.953749 1105542 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 00:50:30.953811 1105542 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 00:50:30.953864 1105542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-305343
	I0917 00:50:30.972069 1105542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33894 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/no-preload-305343/id_rsa Username:docker}
	I0917 00:50:31.071933 1105542 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 00:50:31.075759 1105542 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 00:50:31.075799 1105542 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 00:50:31.075813 1105542 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 00:50:31.075825 1105542 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 00:50:31.075837 1105542 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-749120/.minikube/addons for local assets ...
	I0917 00:50:31.075902 1105542 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-749120/.minikube/files for local assets ...
	I0917 00:50:31.075982 1105542 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem -> 7527072.pem in /etc/ssl/certs
	I0917 00:50:31.076070 1105542 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 00:50:31.085212 1105542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem --> /etc/ssl/certs/7527072.pem (1708 bytes)
	I0917 00:50:31.111110 1105542 start.go:296] duration metric: took 157.358327ms for postStartSetup
	I0917 00:50:31.111187 1105542 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:50:31.111241 1105542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-305343
	I0917 00:50:31.129720 1105542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33894 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/no-preload-305343/id_rsa Username:docker}
	I0917 00:50:31.222275 1105542 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:50:31.226586 1105542 fix.go:56] duration metric: took 4.453629943s for fixHost
	I0917 00:50:31.226610 1105542 start.go:83] releasing machines lock for "no-preload-305343", held for 4.453677502s
	I0917 00:50:31.226680 1105542 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-305343
	I0917 00:50:31.245832 1105542 ssh_runner.go:195] Run: cat /version.json
	I0917 00:50:31.245873 1105542 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 00:50:31.245882 1105542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-305343
	I0917 00:50:31.245943 1105542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-305343
	I0917 00:50:31.266309 1105542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33894 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/no-preload-305343/id_rsa Username:docker}
	I0917 00:50:31.266864 1105542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33894 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/no-preload-305343/id_rsa Username:docker}
	I0917 00:50:31.456251 1105542 ssh_runner.go:195] Run: systemctl --version
	I0917 00:50:31.461506 1105542 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 00:50:31.466079 1105542 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0917 00:50:31.486672 1105542 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0917 00:50:31.486747 1105542 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:50:31.495929 1105542 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0917 00:50:31.495956 1105542 start.go:495] detecting cgroup driver to use...
	I0917 00:50:31.495995 1105542 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:50:31.496042 1105542 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 00:50:31.509362 1105542 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 00:50:31.520971 1105542 docker.go:218] disabling cri-docker service (if available) ...
	I0917 00:50:31.521016 1105542 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 00:50:31.533687 1105542 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 00:50:31.545166 1105542 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 00:50:27.988100 1099270 system_pods.go:86] 8 kube-system pods found
	I0917 00:50:27.988133 1099270 system_pods.go:89] "coredns-66bc5c9577-l6kf4" [3471ccde-7a2f-40db-9b89-0b0b1d99d708] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:50:27.988140 1099270 system_pods.go:89] "etcd-embed-certs-656365" [7019686f-c4e9-4229-96b9-b0b736f1ff1f] Running
	I0917 00:50:27.988149 1099270 system_pods.go:89] "kindnet-82pzc" [98599a74-dfca-4aaa-8c5f-f5b5fa514aea] Running
	I0917 00:50:27.988155 1099270 system_pods.go:89] "kube-apiserver-embed-certs-656365" [4474c0bb-d251-4a3f-9617-05519f6f36e1] Running
	I0917 00:50:27.988160 1099270 system_pods.go:89] "kube-controller-manager-embed-certs-656365" [9d8ea73e-c495-4fc0-8e50-d21f3bbd7bf7] Running
	I0917 00:50:27.988165 1099270 system_pods.go:89] "kube-proxy-h2lgd" [ec2eebd5-b4b3-41cf-af3b-efda8464fe22] Running
	I0917 00:50:27.988170 1099270 system_pods.go:89] "kube-scheduler-embed-certs-656365" [74679934-eb97-474d-99a2-28c356aa74b4] Running
	I0917 00:50:27.988174 1099270 system_pods.go:89] "storage-provisioner" [124939ce-5cb9-41fa-b555-2b807af05792] Running
	I0917 00:50:27.988200 1099270 retry.go:31] will retry after 4.946737764s: missing components: kube-dns
	I0917 00:50:31.609866 1105542 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 00:50:31.685656 1105542 docker.go:234] disabling docker service ...
	I0917 00:50:31.685721 1105542 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 00:50:31.700024 1105542 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 00:50:31.711577 1105542 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 00:50:31.779451 1105542 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 00:50:31.848784 1105542 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 00:50:31.862220 1105542 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:50:31.881443 1105542 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0917 00:50:31.891493 1105542 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 00:50:31.901329 1105542 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0917 00:50:31.901379 1105542 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0917 00:50:31.911827 1105542 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:50:31.922603 1105542 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 00:50:31.932809 1105542 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:50:31.942487 1105542 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 00:50:31.952360 1105542 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 00:50:31.962527 1105542 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 00:50:31.972596 1105542 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 00:50:31.983099 1105542 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 00:50:31.991788 1105542 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 00:50:32.000521 1105542 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:50:32.069475 1105542 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0917 00:50:32.171320 1105542 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0917 00:50:32.171376 1105542 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0917 00:50:32.176131 1105542 start.go:563] Will wait 60s for crictl version
	I0917 00:50:32.176181 1105542 ssh_runner.go:195] Run: which crictl
	I0917 00:50:32.179964 1105542 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:50:32.216471 1105542 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0917 00:50:32.216551 1105542 ssh_runner.go:195] Run: containerd --version
	I0917 00:50:32.242200 1105542 ssh_runner.go:195] Run: containerd --version
	I0917 00:50:32.268158 1105542 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0917 00:50:32.269243 1105542 cli_runner.go:164] Run: docker network inspect no-preload-305343 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:50:32.286307 1105542 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I0917 00:50:32.290109 1105542 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:50:32.301499 1105542 kubeadm.go:875] updating cluster {Name:no-preload-305343 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-305343 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServer
IPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Moun
tIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 00:50:32.301638 1105542 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0917 00:50:32.301684 1105542 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 00:50:32.339717 1105542 containerd.go:627] all images are preloaded for containerd runtime.
	I0917 00:50:32.339740 1105542 cache_images.go:85] Images are preloaded, skipping loading
	I0917 00:50:32.339749 1105542 kubeadm.go:926] updating node { 192.168.103.2 8443 v1.34.0 containerd true true} ...
	I0917 00:50:32.339869 1105542 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-305343 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:no-preload-305343 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 00:50:32.339944 1105542 ssh_runner.go:195] Run: sudo crictl info
	I0917 00:50:32.376814 1105542 cni.go:84] Creating CNI manager for ""
	I0917 00:50:32.376836 1105542 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0917 00:50:32.376849 1105542 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 00:50:32.376872 1105542 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-305343 NodeName:no-preload-305343 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 00:50:32.376990 1105542 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-305343"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0917 00:50:32.377060 1105542 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 00:50:32.387103 1105542 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 00:50:32.387156 1105542 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 00:50:32.397201 1105542 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0917 00:50:32.415962 1105542 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 00:50:32.434721 1105542 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2232 bytes)
	I0917 00:50:32.454136 1105542 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0917 00:50:32.457724 1105542 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
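
The one-liner above strips any stale control-plane.minikube.internal entry from /etc/hosts and appends the current mapping for this profile. Spread out with comments, the same logic reads:

    # keep every line except an old mapping for the control-plane alias
    grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts > /tmp/h.$$
    # append the address currently assigned to this node
    printf '192.168.103.2\tcontrol-plane.minikube.internal\n' >> /tmp/h.$$
    # install the rewritten file
    sudo cp /tmp/h.$$ /etc/hosts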
	I0917 00:50:32.469156 1105542 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:50:32.536961 1105542 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:50:32.560468 1105542 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/no-preload-305343 for IP: 192.168.103.2
	I0917 00:50:32.560485 1105542 certs.go:194] generating shared ca certs ...
	I0917 00:50:32.560505 1105542 certs.go:226] acquiring lock for ca certs: {Name:mk87d179b4a631193bd9c86db8034ccf19400cde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:50:32.560632 1105542 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key
	I0917 00:50:32.560670 1105542 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key
	I0917 00:50:32.560680 1105542 certs.go:256] generating profile certs ...
	I0917 00:50:32.560755 1105542 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/no-preload-305343/client.key
	I0917 00:50:32.560808 1105542 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/no-preload-305343/apiserver.key.ccbdf892
	I0917 00:50:32.560855 1105542 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/no-preload-305343/proxy-client.key
	I0917 00:50:32.560965 1105542 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem (1338 bytes)
	W0917 00:50:32.561003 1105542 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707_empty.pem, impossibly tiny 0 bytes
	I0917 00:50:32.561013 1105542 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 00:50:32.561039 1105542 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/ca.pem (1078 bytes)
	I0917 00:50:32.561060 1105542 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/cert.pem (1123 bytes)
	I0917 00:50:32.561084 1105542 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/certs/key.pem (1675 bytes)
	I0917 00:50:32.561121 1105542 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem (1708 bytes)
	I0917 00:50:32.561753 1105542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 00:50:32.587079 1105542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 00:50:32.614113 1105542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 00:50:32.645555 1105542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0917 00:50:32.676563 1105542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/no-preload-305343/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0917 00:50:32.702407 1105542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/no-preload-305343/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 00:50:32.727828 1105542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/no-preload-305343/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 00:50:32.756314 1105542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/no-preload-305343/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0917 00:50:32.784212 1105542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/certs/752707.pem --> /usr/share/ca-certificates/752707.pem (1338 bytes)
	I0917 00:50:32.810459 1105542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/ssl/certs/7527072.pem --> /usr/share/ca-certificates/7527072.pem (1708 bytes)
	I0917 00:50:32.834131 1105542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-749120/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 00:50:32.857459 1105542 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 00:50:32.874913 1105542 ssh_runner.go:195] Run: openssl version
	I0917 00:50:32.880315 1105542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 00:50:32.890355 1105542 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:50:32.894019 1105542 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:50:32.894073 1105542 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:50:32.901292 1105542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 00:50:32.910006 1105542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/752707.pem && ln -fs /usr/share/ca-certificates/752707.pem /etc/ssl/certs/752707.pem"
	I0917 00:50:32.920695 1105542 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/752707.pem
	I0917 00:50:32.925468 1105542 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 23:54 /usr/share/ca-certificates/752707.pem
	I0917 00:50:32.925518 1105542 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/752707.pem
	I0917 00:50:32.933107 1105542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/752707.pem /etc/ssl/certs/51391683.0"
	I0917 00:50:32.944342 1105542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7527072.pem && ln -fs /usr/share/ca-certificates/7527072.pem /etc/ssl/certs/7527072.pem"
	I0917 00:50:32.954775 1105542 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7527072.pem
	I0917 00:50:32.958957 1105542 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 23:54 /usr/share/ca-certificates/7527072.pem
	I0917 00:50:32.959013 1105542 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7527072.pem
	I0917 00:50:32.966492 1105542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7527072.pem /etc/ssl/certs/3ec20f2e.0"
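
The symlink names created above (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's subject-hash convention, which is how the TLS stack locates a CA in /etc/ssl/certs without knowing its filename. The hash for a given certificate can be reproduced directly, using the minikubeCA path from the log:

    # prints the subject hash, e.g. b5213941, that the ".0" symlink is named after
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # link the CA into the trust directory under that name, as the log does
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0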
	I0917 00:50:32.975584 1105542 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 00:50:32.979076 1105542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 00:50:32.986429 1105542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 00:50:32.993046 1105542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 00:50:32.999821 1105542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 00:50:33.006600 1105542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 00:50:33.013633 1105542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
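
The -checkend 86400 runs above ask openssl whether each control-plane certificate is still valid 24 hours (86400 seconds) from now; a non-zero exit means the certificate expires inside that window. The same check can be run by hand against any of the paths listed:

    # exit status 0: still valid in 24h; 1: expires within 24h
    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "valid for at least another day" \
      || echo "expires within 24h"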
	I0917 00:50:33.020609 1105542 kubeadm.go:392] StartCluster: {Name:no-preload-305343 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-305343 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:50:33.020692 1105542 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0917 00:50:33.020742 1105542 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 00:50:33.057726 1105542 cri.go:89] found id: "be727acb2413cb62fed400065b77a2d8563f6d702ad011f84625f93046d14fa8"
	I0917 00:50:33.057751 1105542 cri.go:89] found id: "2b254e4e0a0913ce22ca281722e2f2a1770d019ec5cb32075a2454bb408e22b6"
	I0917 00:50:33.057756 1105542 cri.go:89] found id: "eadc8234c1f716492213f4ceca0728ba4a8c39da932123890849560c7b9720bb"
	I0917 00:50:33.057761 1105542 cri.go:89] found id: "91590efa3cb9a0c2d0bd6417fedf19627c4a6660552820f8106925860be386a9"
	I0917 00:50:33.057766 1105542 cri.go:89] found id: "ad80eddc5416b6ae6865995f75f38026168b4b99c810f8838d86bfec63fa39ae"
	I0917 00:50:33.057771 1105542 cri.go:89] found id: "4a0e86b6bd8d5d5d1cabb8dcdf6cfc8320af52b7e9169b57ed5e4e130d49dff2"
	I0917 00:50:33.057775 1105542 cri.go:89] found id: "c93b4c8d7d7ec1ea6230427e1f23c4d57729b33e12e06f8005808cd41870e3e6"
	I0917 00:50:33.057780 1105542 cri.go:89] found id: "f746107f720e032ae0dac8b9d00658f03da9bc2d049aa1899f0b5d09ce38aecc"
	I0917 00:50:33.057797 1105542 cri.go:89] found id: ""
	I0917 00:50:33.057846 1105542 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W0917 00:50:33.072596 1105542 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T00:50:33Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I0917 00:50:33.072659 1105542 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 00:50:33.083493 1105542 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0917 00:50:33.083567 1105542 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0917 00:50:33.083620 1105542 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0917 00:50:33.097898 1105542 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:50:33.098968 1105542 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-305343" does not appear in /home/jenkins/minikube-integration/21550-749120/kubeconfig
	I0917 00:50:33.099672 1105542 kubeconfig.go:62] /home/jenkins/minikube-integration/21550-749120/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-305343" cluster setting kubeconfig missing "no-preload-305343" context setting]
	I0917 00:50:33.100686 1105542 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/kubeconfig: {Name:mk937123a8fee18625833b0bd778c4556f6787be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
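
The repair above boils down to re-adding the missing cluster and context entries for this profile to the jenkins kubeconfig. Done by hand with kubectl it would look roughly like the following; the server address and entry names are taken from the log, the user name is assumed to match the profile, and certificate flags are omitted:

    KUBECONFIG=/home/jenkins/minikube-integration/21550-749120/kubeconfig
    kubectl config set-cluster no-preload-305343 --server=https://192.168.103.2:8443 --kubeconfig="$KUBECONFIG"
    kubectl config set-context no-preload-305343 --cluster=no-preload-305343 --user=no-preload-305343 --kubeconfig="$KUBECONFIG"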
	I0917 00:50:33.102853 1105542 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0917 00:50:33.116499 1105542 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.103.2
	I0917 00:50:33.116539 1105542 kubeadm.go:593] duration metric: took 32.96128ms to restartPrimaryControlPlane
	I0917 00:50:33.116552 1105542 kubeadm.go:394] duration metric: took 95.951287ms to StartCluster
	I0917 00:50:33.116574 1105542 settings.go:142] acquiring lock: {Name:mk6c1a5bee23e141aad5180323c16c47ed580ab8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:50:33.116635 1105542 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21550-749120/kubeconfig
	I0917 00:50:33.118404 1105542 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-749120/kubeconfig: {Name:mk937123a8fee18625833b0bd778c4556f6787be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:50:33.118866 1105542 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0917 00:50:33.119205 1105542 config.go:182] Loaded profile config "no-preload-305343": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:50:33.119256 1105542 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 00:50:33.119338 1105542 addons.go:69] Setting storage-provisioner=true in profile "no-preload-305343"
	I0917 00:50:33.119362 1105542 addons.go:238] Setting addon storage-provisioner=true in "no-preload-305343"
	W0917 00:50:33.119371 1105542 addons.go:247] addon storage-provisioner should already be in state true
	I0917 00:50:33.119396 1105542 host.go:66] Checking if "no-preload-305343" exists ...
	I0917 00:50:33.119755 1105542 addons.go:69] Setting default-storageclass=true in profile "no-preload-305343"
	I0917 00:50:33.119773 1105542 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-305343"
	I0917 00:50:33.120107 1105542 cli_runner.go:164] Run: docker container inspect no-preload-305343 --format={{.State.Status}}
	I0917 00:50:33.120165 1105542 addons.go:69] Setting metrics-server=true in profile "no-preload-305343"
	I0917 00:50:33.120185 1105542 addons.go:238] Setting addon metrics-server=true in "no-preload-305343"
	W0917 00:50:33.120193 1105542 addons.go:247] addon metrics-server should already be in state true
	I0917 00:50:33.120222 1105542 host.go:66] Checking if "no-preload-305343" exists ...
	I0917 00:50:33.121014 1105542 cli_runner.go:164] Run: docker container inspect no-preload-305343 --format={{.State.Status}}
	I0917 00:50:33.121115 1105542 addons.go:69] Setting dashboard=true in profile "no-preload-305343"
	I0917 00:50:33.121436 1105542 addons.go:238] Setting addon dashboard=true in "no-preload-305343"
	W0917 00:50:33.121452 1105542 addons.go:247] addon dashboard should already be in state true
	I0917 00:50:33.121490 1105542 host.go:66] Checking if "no-preload-305343" exists ...
	I0917 00:50:33.121902 1105542 cli_runner.go:164] Run: docker container inspect no-preload-305343 --format={{.State.Status}}
	I0917 00:50:33.122278 1105542 out.go:179] * Verifying Kubernetes components...
	I0917 00:50:33.122357 1105542 cli_runner.go:164] Run: docker container inspect no-preload-305343 --format={{.State.Status}}
	I0917 00:50:33.123546 1105542 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:50:33.167058 1105542 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 00:50:33.168115 1105542 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0917 00:50:33.171091 1105542 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 00:50:33.171119 1105542 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0917 00:50:33.171183 1105542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-305343
	I0917 00:50:33.173949 1105542 addons.go:238] Setting addon default-storageclass=true in "no-preload-305343"
	W0917 00:50:33.174269 1105542 addons.go:247] addon default-storageclass should already be in state true
	I0917 00:50:33.174232 1105542 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0917 00:50:33.174392 1105542 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0917 00:50:33.174612 1105542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-305343
	I0917 00:50:33.177380 1105542 host.go:66] Checking if "no-preload-305343" exists ...
	I0917 00:50:33.177760 1105542 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0917 00:50:33.177899 1105542 cli_runner.go:164] Run: docker container inspect no-preload-305343 --format={{.State.Status}}
	I0917 00:50:33.181837 1105542 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0917 00:50:33.182786 1105542 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0917 00:50:33.182820 1105542 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0917 00:50:33.182875 1105542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-305343
	I0917 00:50:33.201609 1105542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33894 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/no-preload-305343/id_rsa Username:docker}
	I0917 00:50:33.201953 1105542 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0917 00:50:33.203128 1105542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33894 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/no-preload-305343/id_rsa Username:docker}
	I0917 00:50:33.203534 1105542 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0917 00:50:33.203626 1105542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-305343
	I0917 00:50:33.208310 1105542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33894 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/no-preload-305343/id_rsa Username:docker}
	I0917 00:50:33.225830 1105542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33894 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/no-preload-305343/id_rsa Username:docker}
	I0917 00:50:33.262072 1105542 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:50:33.292201 1105542 node_ready.go:35] waiting up to 6m0s for node "no-preload-305343" to be "Ready" ...
	I0917 00:50:33.323048 1105542 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0917 00:50:33.323076 1105542 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0917 00:50:33.329152 1105542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 00:50:33.336925 1105542 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0917 00:50:33.336952 1105542 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0917 00:50:33.356568 1105542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0917 00:50:33.359137 1105542 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0917 00:50:33.359190 1105542 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0917 00:50:33.364438 1105542 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0917 00:50:33.364465 1105542 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0917 00:50:33.391853 1105542 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 00:50:33.391881 1105542 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0917 00:50:33.395911 1105542 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0917 00:50:33.395935 1105542 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0917 00:50:33.424313 1105542 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0917 00:50:33.424344 1105542 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0917 00:50:33.424732 1105542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0917 00:50:33.427733 1105542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0917 00:50:33.427913 1105542 retry.go:31] will retry after 328.317114ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0917 00:50:33.449613 1105542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0917 00:50:33.449658 1105542 retry.go:31] will retry after 289.134032ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0917 00:50:33.461946 1105542 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0917 00:50:33.461976 1105542 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0917 00:50:33.494149 1105542 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0917 00:50:33.494179 1105542 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	W0917 00:50:33.519674 1105542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0917 00:50:33.519720 1105542 retry.go:31] will retry after 266.352131ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0917 00:50:33.522886 1105542 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0917 00:50:33.522948 1105542 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0917 00:50:33.554013 1105542 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0917 00:50:33.554049 1105542 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0917 00:50:33.585811 1105542 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0917 00:50:33.585864 1105542 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0917 00:50:33.615550 1105542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0917 00:50:33.739006 1105542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0917 00:50:33.756455 1105542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 00:50:33.787125 1105542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 00:50:35.018063 1105542 node_ready.go:49] node "no-preload-305343" is "Ready"
	I0917 00:50:35.018105 1105542 node_ready.go:38] duration metric: took 1.725867509s for node "no-preload-305343" to be "Ready" ...
	I0917 00:50:35.018134 1105542 api_server.go:52] waiting for apiserver process to appear ...
	I0917 00:50:35.018194 1105542 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:50:35.485441 1105542 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.869819068s)
	I0917 00:50:35.485470 1105542 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (1.746425045s)
	I0917 00:50:35.486676 1105542 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-305343 addons enable metrics-server
	
	I0917 00:50:35.585159 1105542 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.828669148s)
	I0917 00:50:35.596571 1105542 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.809405331s)
	I0917 00:50:35.596607 1105542 addons.go:479] Verifying addon metrics-server=true in "no-preload-305343"
	I0917 00:50:35.596649 1105542 api_server.go:72] duration metric: took 2.477742727s to wait for apiserver process to appear ...
	I0917 00:50:35.596672 1105542 api_server.go:88] waiting for apiserver healthz status ...
	I0917 00:50:35.596692 1105542 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0917 00:50:35.598000 1105542 out.go:179] * Enabled addons: dashboard, default-storageclass, storage-provisioner, metrics-server
	I0917 00:50:35.598959 1105542 addons.go:514] duration metric: took 2.479706242s for enable addons: enabled=[dashboard default-storageclass storage-provisioner metrics-server]
	I0917 00:50:35.600549 1105542 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 00:50:35.600575 1105542 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 00:50:36.096781 1105542 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0917 00:50:36.102339 1105542 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 00:50:36.102365 1105542 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
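
The two 500 responses above are the apiserver's detailed /healthz output: the endpoint reports each named check, and the overall probe fails while post-start hooks such as rbac/bootstrap-roles and apiservice-discovery-controller are still settling after the restart. Individual checks can also be queried on their own paths, which is handy for watching just the failing ones; the endpoint below is the one from the log:

    # detailed overall health, the same listing that appears in the log
    curl -k 'https://192.168.103.2:8443/healthz?verbose'
    # query one failing check by name
    curl -k https://192.168.103.2:8443/healthz/poststarthook/rbac/bootstrap-roles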
	I0917 00:50:36.596805 1105542 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0917 00:50:32.939849 1099270 system_pods.go:86] 8 kube-system pods found
	I0917 00:50:32.939879 1099270 system_pods.go:89] "coredns-66bc5c9577-l6kf4" [3471ccde-7a2f-40db-9b89-0b0b1d99d708] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:50:32.939885 1099270 system_pods.go:89] "etcd-embed-certs-656365" [7019686f-c4e9-4229-96b9-b0b736f1ff1f] Running
	I0917 00:50:32.939891 1099270 system_pods.go:89] "kindnet-82pzc" [98599a74-dfca-4aaa-8c5f-f5b5fa514aea] Running
	I0917 00:50:32.939896 1099270 system_pods.go:89] "kube-apiserver-embed-certs-656365" [4474c0bb-d251-4a3f-9617-05519f6f36e1] Running
	I0917 00:50:32.939900 1099270 system_pods.go:89] "kube-controller-manager-embed-certs-656365" [9d8ea73e-c495-4fc0-8e50-d21f3bbd7bf7] Running
	I0917 00:50:32.939904 1099270 system_pods.go:89] "kube-proxy-h2lgd" [ec2eebd5-b4b3-41cf-af3b-efda8464fe22] Running
	I0917 00:50:32.939907 1099270 system_pods.go:89] "kube-scheduler-embed-certs-656365" [74679934-eb97-474d-99a2-28c356aa74b4] Running
	I0917 00:50:32.939911 1099270 system_pods.go:89] "storage-provisioner" [124939ce-5cb9-41fa-b555-2b807af05792] Running
	I0917 00:50:32.939925 1099270 retry.go:31] will retry after 7.48327625s: missing components: kube-dns
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:50:39.168837    1814 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	time="2025-09-17T00:50:39Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
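
The "unknown service runtime.v1alpha2.RuntimeService" errors above mean the containerd socket is not serving the CRI RuntimeService the client asked for, and the docker fallback then fails because no Docker daemon runs inside this node; the containerd log below points at the likely reason, since the io.containerd.grpc.v1.cri plugin failed to load at startup. A quick probe of whether an endpoint serves CRI at all, and which API version, is:

    # reports Version and RuntimeApiVersion when the CRI plugin is actually up
    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version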
	
	
	==> containerd <==
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.817851916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.817861430Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.817871180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.817881996Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.817897602Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.817915140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.817935663Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818007576Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818024724Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818033043Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818056520Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818070483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818081010Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818090259Z" level=info msg="NRI interface is disabled by configuration."
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818098205Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818326501Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:true IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath:/etc/containerd/certs.d Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress: StreamServerPort:10010 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.9 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:true EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818370040Z" level=info msg="Connect containerd service"
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818405590Z" level=info msg="using legacy CRI server"
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818449224Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818553850Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.818992386Z" level=warning msg="failed to load plugin io.containerd.grpc.v1.cri" error="failed to create CRI service: failed to create cni conf monitor for default: failed to create fsnotify watcher: too many open files"
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.819183751Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.819229690Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Sep 17 00:49:32 old-k8s-version-099552 containerd[469]: time="2025-09-17T00:49:32.819274263Z" level=info msg="containerd successfully booted in 0.026038s"
	Sep 17 00:49:32 old-k8s-version-099552 systemd[1]: Started containerd container runtime.
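
The notable line in the containerd startup log above is the io.containerd.grpc.v1.cri load failure: creating the plugin's fsnotify watcher hit "too many open files", which on a loaded CI host usually means the kernel's inotify instance limit is exhausted rather than ordinary file-descriptor limits. With the CRI plugin absent, every crictl call against this node fails as seen elsewhere in this log. The limits can be inspected and raised like so (the values shown are only illustrative):

    # current inotify limits on the host
    sysctl fs.inotify.max_user_instances fs.inotify.max_user_watches
    # raise them for the running system; persist via /etc/sysctl.d/ if it helps
    sudo sysctl -w fs.inotify.max_user_instances=8192
    sudo sysctl -w fs.inotify.max_user_watches=1048576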
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.397971] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev bridge
	[  +1.103886] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev bridge
	[  +0.397468] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev bridge
	[  +0.988018] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev bridge
	[  +0.115808] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev bridge
	[  +0.396948] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev bridge
	[  +1.104485] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev bridge
	[  +0.396293] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev bridge
	[  +1.105124] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev bridge
	[  +0.396148] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev bridge
	[  +1.500649] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev bridge
	[  +0.569526] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 5f ef 1c 19 8f 08 06
	[ +14.523051] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 56 52 1e b8 75 d2 08 06
	[  +0.000432] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff ba 5e 90 6a a2 f3 08 06
	[  +7.560975] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 2a d4 fe 64 89 0f 08 06
	[  +0.000660] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d6 50 38 7f fb 72 08 06
	[Sep17 00:48] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ea ab 9c df 51 6c 08 06
	[  +0.000561] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 92 5f ef 1c 19 8f 08 06
	
	
	==> kernel <==
	 00:50:39 up  3:33,  0 users,  load average: 2.77, 2.87, 2.17
	Linux old-k8s-version-099552 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kubelet <==
	-- No entries --
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0917 00:50:38.842494 1111134 logs.go:279] Failed to list containers for "kube-apiserver": crictl list: sudo crictl ps -a --quiet --name=kube-apiserver: Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:50:38.839897    1705 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	time="2025-09-17T00:50:38Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0917 00:50:38.877300 1111134 logs.go:279] Failed to list containers for "etcd": crictl list: sudo crictl ps -a --quiet --name=etcd: Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:50:38.874571    1717 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	time="2025-09-17T00:50:38Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0917 00:50:38.913278 1111134 logs.go:279] Failed to list containers for "coredns": crictl list: sudo crictl ps -a --quiet --name=coredns: Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:50:38.910572    1729 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	time="2025-09-17T00:50:38Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0917 00:50:38.948136 1111134 logs.go:279] Failed to list containers for "kube-scheduler": crictl list: sudo crictl ps -a --quiet --name=kube-scheduler: Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:50:38.945369    1741 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	time="2025-09-17T00:50:38Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0917 00:50:38.982439 1111134 logs.go:279] Failed to list containers for "kube-proxy": crictl list: sudo crictl ps -a --quiet --name=kube-proxy: Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:50:38.979779    1753 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	time="2025-09-17T00:50:38Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0917 00:50:39.021149 1111134 logs.go:279] Failed to list containers for "kube-controller-manager": crictl list: sudo crictl ps -a --quiet --name=kube-controller-manager: Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:50:39.017572    1764 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	time="2025-09-17T00:50:39Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0917 00:50:39.056592 1111134 logs.go:279] Failed to list containers for "kindnet": crictl list: sudo crictl ps -a --quiet --name=kindnet: Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:50:39.053941    1776 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	time="2025-09-17T00:50:39Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0917 00:50:39.092371 1111134 logs.go:279] Failed to list containers for "kubernetes-dashboard": crictl list: sudo crictl ps -a --quiet --name=kubernetes-dashboard: Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:50:39.089727    1787 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	time="2025-09-17T00:50:39Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0917 00:50:39.127849 1111134 logs.go:279] Failed to list containers for "storage-provisioner": crictl list: sudo crictl ps -a --quiet --name=storage-provisioner: Process exited with status 1
	stdout:
	
	stderr:
	E0917 00:50:39.125028    1799 remote_runtime.go:557] "ListContainers with filter from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	time="2025-09-17T00:50:39Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"

                                                
                                                
** /stderr **
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-099552 -n old-k8s-version-099552
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-099552 -n old-k8s-version-099552: exit status 6 (301.066224ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0917 00:50:39.661379 1111396 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-099552" does not appear in /home/jenkins/minikube-integration/21550-749120/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "old-k8s-version-099552" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (5.33s)
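Note on the stderr block above: every crictl listing fails with "Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService", which usually means the crictl bundled with this old-k8s-version image is still requesting the deprecated v1alpha2 CRI service while the node's containerd only serves the v1 CRI API. A minimal diagnostic sketch (hypothetical follow-up commands, not part of the test run; it assumes a reasonably recent crictl and the usual containerd socket path):

	# With a crictl build that supports the v1 API, RuntimeApiVersion shows what the runtime serves.
	sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version
	# Re-run one of the failing listings against the same endpoint to separate endpoint
	# selection problems from a genuinely missing v1alpha2 service.
	sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a --quiet --name=kube-apiserver

If RuntimeApiVersion reports v1 and the listing succeeds, the likely fix is a crictl that speaks the v1 API rather than any change on the containerd side.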

                                                
                                    

Test pass (289/329)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 13.01
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.06
9 TestDownloadOnly/v1.28.0/DeleteAll 0.2
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.0/json-events 12.02
13 TestDownloadOnly/v1.34.0/preload-exists 0
17 TestDownloadOnly/v1.34.0/LogsDuration 0.06
18 TestDownloadOnly/v1.34.0/DeleteAll 0.2
19 TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds 0.13
20 TestDownloadOnlyKic 1.15
21 TestBinaryMirror 0.78
22 TestOffline 50.46
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 153.25
29 TestAddons/serial/Volcano 39.59
31 TestAddons/serial/GCPAuth/Namespaces 0.11
32 TestAddons/serial/GCPAuth/FakeCredentials 9.46
35 TestAddons/parallel/Registry 15.66
36 TestAddons/parallel/RegistryCreds 0.71
37 TestAddons/parallel/Ingress 19.03
38 TestAddons/parallel/InspektorGadget 6.3
39 TestAddons/parallel/MetricsServer 5.64
41 TestAddons/parallel/CSI 44.38
42 TestAddons/parallel/Headlamp 17.43
43 TestAddons/parallel/CloudSpanner 5.5
44 TestAddons/parallel/LocalPath 56.6
45 TestAddons/parallel/NvidiaDevicePlugin 6.53
46 TestAddons/parallel/Yakd 10.67
47 TestAddons/parallel/AmdGpuDevicePlugin 6.47
48 TestAddons/StoppedEnableDisable 12.51
49 TestCertOptions 29.31
50 TestCertExpiration 211.66
52 TestForceSystemdFlag 27.46
53 TestForceSystemdEnv 33.01
54 TestDockerEnvContainerd 35.75
55 TestKVMDriverInstallOrUpdate 1.98
59 TestErrorSpam/setup 19.75
60 TestErrorSpam/start 0.58
61 TestErrorSpam/status 0.88
62 TestErrorSpam/pause 1.46
63 TestErrorSpam/unpause 1.53
64 TestErrorSpam/stop 1.89
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 40.27
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 5.83
71 TestFunctional/serial/KubeContext 0.04
72 TestFunctional/serial/KubectlGetPods 0.06
75 TestFunctional/serial/CacheCmd/cache/add_remote 2.75
76 TestFunctional/serial/CacheCmd/cache/add_local 1.87
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.52
81 TestFunctional/serial/CacheCmd/cache/delete 0.1
82 TestFunctional/serial/MinikubeKubectlCmd 0.11
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
84 TestFunctional/serial/ExtraConfig 45.94
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 1.36
87 TestFunctional/serial/LogsFileCmd 1.36
88 TestFunctional/serial/InvalidService 3.94
90 TestFunctional/parallel/ConfigCmd 0.34
91 TestFunctional/parallel/DashboardCmd 8.11
92 TestFunctional/parallel/DryRun 0.35
93 TestFunctional/parallel/InternationalLanguage 0.15
94 TestFunctional/parallel/StatusCmd 0.95
98 TestFunctional/parallel/ServiceCmdConnect 15.51
99 TestFunctional/parallel/AddonsCmd 0.12
100 TestFunctional/parallel/PersistentVolumeClaim 32.52
102 TestFunctional/parallel/SSHCmd 0.58
103 TestFunctional/parallel/CpCmd 1.73
104 TestFunctional/parallel/MySQL 20.18
105 TestFunctional/parallel/FileSync 0.28
106 TestFunctional/parallel/CertSync 1.77
110 TestFunctional/parallel/NodeLabels 0.06
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.5
114 TestFunctional/parallel/License 0.4
115 TestFunctional/parallel/Version/short 0.06
116 TestFunctional/parallel/Version/components 0.52
117 TestFunctional/parallel/UpdateContextCmd/no_changes 0.14
118 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.16
119 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
120 TestFunctional/parallel/ProfileCmd/profile_not_create 0.42
121 TestFunctional/parallel/ProfileCmd/profile_list 0.42
122 TestFunctional/parallel/ProfileCmd/profile_json_output 0.38
124 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.53
125 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
127 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 18.19
128 TestFunctional/parallel/ServiceCmd/DeployApp 8.16
129 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
130 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
134 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
135 TestFunctional/parallel/ImageCommands/ImageListShort 0.25
136 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
137 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
138 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
139 TestFunctional/parallel/ImageCommands/ImageBuild 3.82
140 TestFunctional/parallel/ImageCommands/Setup 1.79
141 TestFunctional/parallel/MountCmd/any-port 7.59
142 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.04
143 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.94
144 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.89
145 TestFunctional/parallel/ServiceCmd/List 0.91
146 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.34
147 TestFunctional/parallel/ImageCommands/ImageRemove 0.47
148 TestFunctional/parallel/ServiceCmd/JSONOutput 0.91
149 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.55
150 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.41
151 TestFunctional/parallel/ServiceCmd/HTTPS 0.55
152 TestFunctional/parallel/ServiceCmd/Format 0.54
153 TestFunctional/parallel/MountCmd/specific-port 2.03
154 TestFunctional/parallel/ServiceCmd/URL 0.56
155 TestFunctional/parallel/MountCmd/VerifyCleanup 1.59
156 TestFunctional/delete_echo-server_images 0.04
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
163 TestMultiControlPlane/serial/StartCluster 127.51
167 TestMultiControlPlane/serial/NodeLabels 0.06
168 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.69
171 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.54
173 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.75
177 TestMultiControlPlane/serial/StopCluster 24.1
182 TestJSONOutput/start/Command 40.97
183 TestJSONOutput/start/Audit 0
185 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/pause/Command 0.68
189 TestJSONOutput/pause/Audit 0
191 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/unpause/Command 0.62
195 TestJSONOutput/unpause/Audit 0
197 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/stop/Command 5.67
201 TestJSONOutput/stop/Audit 0
203 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
205 TestErrorJSONOutput 0.19
207 TestKicCustomNetwork/create_custom_network 35.59
208 TestKicCustomNetwork/use_default_bridge_network 23.54
209 TestKicExistingNetwork 24.31
210 TestKicCustomSubnet 25.62
211 TestKicStaticIP 24.24
212 TestMainNoArgs 0.05
213 TestMinikubeProfile 48.29
216 TestMountStart/serial/StartWithMountFirst 5.56
217 TestMountStart/serial/VerifyMountFirst 0.25
218 TestMountStart/serial/StartWithMountSecond 5.48
219 TestMountStart/serial/VerifyMountSecond 0.25
220 TestMountStart/serial/DeleteFirst 1.62
221 TestMountStart/serial/VerifyMountPostDelete 0.24
222 TestMountStart/serial/Stop 1.17
223 TestMountStart/serial/RestartStopped 7.48
224 TestMountStart/serial/VerifyMountPostStop 0.24
227 TestMultiNode/serial/FreshStart2Nodes 52.88
228 TestMultiNode/serial/DeployApp2Nodes 17.67
229 TestMultiNode/serial/PingHostFrom2Pods 0.78
230 TestMultiNode/serial/AddNode 12.39
231 TestMultiNode/serial/MultiNodeLabels 0.07
232 TestMultiNode/serial/ProfileList 0.65
233 TestMultiNode/serial/CopyFile 9.16
234 TestMultiNode/serial/StopNode 2.13
235 TestMultiNode/serial/StartAfterStop 7.03
236 TestMultiNode/serial/RestartKeepsNodes 68.4
237 TestMultiNode/serial/DeleteNode 5.02
238 TestMultiNode/serial/StopMultiNode 23.88
239 TestMultiNode/serial/RestartMultiNode 48.86
240 TestMultiNode/serial/ValidateNameConflict 21.92
245 TestPreload 110.27
247 TestScheduledStopUnix 95.67
250 TestInsufficientStorage 9.12
251 TestRunningBinaryUpgrade 45.42
253 TestKubernetesUpgrade 313.3
254 TestMissingContainerUpgrade 82.8
256 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
260 TestNoKubernetes/serial/StartWithK8s 25.75
265 TestNetworkPlugins/group/false 5.36
269 TestNoKubernetes/serial/StartWithStopK8s 18.48
270 TestNoKubernetes/serial/Start 7.08
271 TestStoppedBinaryUpgrade/Setup 2.62
272 TestNoKubernetes/serial/VerifyK8sNotRunning 0.32
273 TestNoKubernetes/serial/ProfileList 4.12
274 TestStoppedBinaryUpgrade/Upgrade 78.68
275 TestNoKubernetes/serial/Stop 1.55
276 TestNoKubernetes/serial/StartNoArgs 7.08
277 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.32
278 TestStoppedBinaryUpgrade/MinikubeLogs 1.21
287 TestPause/serial/Start 94.42
288 TestNetworkPlugins/group/auto/Start 118.65
289 TestPause/serial/SecondStartNoReconfiguration 5.82
290 TestNetworkPlugins/group/calico/Start 45.61
291 TestPause/serial/Pause 0.75
292 TestPause/serial/VerifyStatus 0.3
293 TestPause/serial/Unpause 0.61
294 TestPause/serial/PauseAgain 0.69
295 TestPause/serial/DeletePaused 2.63
296 TestPause/serial/VerifyDeletedResources 14.92
297 TestNetworkPlugins/group/custom-flannel/Start 137.66
298 TestNetworkPlugins/group/calico/ControllerPod 6.01
299 TestNetworkPlugins/group/calico/KubeletFlags 0.3
300 TestNetworkPlugins/group/calico/NetCatPod 9.22
301 TestNetworkPlugins/group/auto/KubeletFlags 0.45
302 TestNetworkPlugins/group/auto/NetCatPod 9.23
303 TestNetworkPlugins/group/calico/DNS 0.13
304 TestNetworkPlugins/group/calico/Localhost 0.11
305 TestNetworkPlugins/group/calico/HairPin 0.11
306 TestNetworkPlugins/group/auto/DNS 0.13
307 TestNetworkPlugins/group/auto/Localhost 0.12
308 TestNetworkPlugins/group/auto/HairPin 0.11
309 TestNetworkPlugins/group/kindnet/Start 118.3
310 TestNetworkPlugins/group/flannel/Start 198.64
311 TestNetworkPlugins/group/enable-default-cni/Start 128.58
312 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.34
313 TestNetworkPlugins/group/custom-flannel/NetCatPod 8.23
314 TestNetworkPlugins/group/custom-flannel/DNS 0.12
315 TestNetworkPlugins/group/custom-flannel/Localhost 0.1
316 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
317 TestNetworkPlugins/group/bridge/Start 69.16
318 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
319 TestNetworkPlugins/group/kindnet/KubeletFlags 0.28
320 TestNetworkPlugins/group/kindnet/NetCatPod 8.23
321 TestNetworkPlugins/group/kindnet/DNS 0.13
322 TestNetworkPlugins/group/kindnet/Localhost 0.1
323 TestNetworkPlugins/group/kindnet/HairPin 0.11
325 TestStartStop/group/old-k8s-version/serial/FirstStart 114.95
326 TestNetworkPlugins/group/bridge/KubeletFlags 0.28
327 TestNetworkPlugins/group/bridge/NetCatPod 8.19
328 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.29
329 TestNetworkPlugins/group/enable-default-cni/NetCatPod 8.17
330 TestNetworkPlugins/group/bridge/DNS 0.14
331 TestNetworkPlugins/group/bridge/Localhost 0.13
332 TestNetworkPlugins/group/bridge/HairPin 0.15
333 TestNetworkPlugins/group/enable-default-cni/DNS 0.14
334 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
335 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
336 TestNetworkPlugins/group/flannel/ControllerPod 6.01
337 TestNetworkPlugins/group/flannel/KubeletFlags 0.28
338 TestNetworkPlugins/group/flannel/NetCatPod 9.19
340 TestStartStop/group/no-preload/serial/FirstStart 121.4
341 TestNetworkPlugins/group/flannel/DNS 0.15
342 TestNetworkPlugins/group/flannel/Localhost 0.15
343 TestNetworkPlugins/group/flannel/HairPin 0.13
345 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 43.92
347 TestStartStop/group/newest-cni/serial/FirstStart 45.41
348 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.25
349 TestStartStop/group/old-k8s-version/serial/DeployApp 9.26
350 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.81
351 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.08
352 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.88
353 TestStartStop/group/old-k8s-version/serial/Stop 12.04
354 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
355 TestStartStop/group/newest-cni/serial/DeployApp 0
356 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.85
357 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 51.07
358 TestStartStop/group/newest-cni/serial/Stop 1.25
359 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.17
360 TestStartStop/group/newest-cni/serial/SecondStart 15.45
361 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.22
363 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
364 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
365 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.32
366 TestStartStop/group/newest-cni/serial/Pause 2.77
368 TestStartStop/group/embed-certs/serial/FirstStart 80.82
369 TestStartStop/group/no-preload/serial/DeployApp 8.24
370 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
371 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.8
372 TestStartStop/group/no-preload/serial/Stop 11.98
373 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
374 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.23
375 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.69
376 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
377 TestStartStop/group/no-preload/serial/SecondStart 44.52
382 TestStartStop/group/embed-certs/serial/DeployApp 9.23
383 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
384 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.81
385 TestStartStop/group/embed-certs/serial/Stop 11.93
386 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
387 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.23
388 TestStartStop/group/no-preload/serial/Pause 2.78
389 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
390 TestStartStop/group/embed-certs/serial/SecondStart 43.14
391 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
392 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
393 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.22
394 TestStartStop/group/embed-certs/serial/Pause 2.63
x
+
TestDownloadOnly/v1.28.0/json-events (13.01s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-187489 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-187489 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (13.007919376s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (13.01s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I0916 23:47:48.882633  752707 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I0916 23:47:48.882736  752707 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-187489
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-187489: exit status 85 (57.967467ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-187489 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-187489 │ jenkins │ v1.37.0 │ 16 Sep 25 23:47 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/16 23:47:35
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 23:47:35.915077  752719 out.go:360] Setting OutFile to fd 1 ...
	I0916 23:47:35.915183  752719 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0916 23:47:35.915191  752719 out.go:374] Setting ErrFile to fd 2...
	I0916 23:47:35.915196  752719 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0916 23:47:35.915388  752719 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-749120/.minikube/bin
	W0916 23:47:35.915528  752719 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21550-749120/.minikube/config/config.json: open /home/jenkins/minikube-integration/21550-749120/.minikube/config/config.json: no such file or directory
	I0916 23:47:35.915964  752719 out.go:368] Setting JSON to true
	I0916 23:47:35.916857  752719 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":8998,"bootTime":1758057458,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 23:47:35.916968  752719 start.go:140] virtualization: kvm guest
	I0916 23:47:35.918838  752719 out.go:99] [download-only-187489] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W0916 23:47:35.918962  752719 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball: no such file or directory
	I0916 23:47:35.918971  752719 notify.go:220] Checking for updates...
	I0916 23:47:35.920043  752719 out.go:171] MINIKUBE_LOCATION=21550
	I0916 23:47:35.921149  752719 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 23:47:35.922576  752719 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21550-749120/kubeconfig
	I0916 23:47:35.923534  752719 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-749120/.minikube
	I0916 23:47:35.924563  752719 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0916 23:47:35.926378  752719 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0916 23:47:35.926658  752719 driver.go:421] Setting default libvirt URI to qemu:///system
	I0916 23:47:35.948720  752719 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0916 23:47:35.948785  752719 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 23:47:36.002846  752719 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-09-16 23:47:35.993330917 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 23:47:36.002947  752719 docker.go:318] overlay module found
	I0916 23:47:36.004512  752719 out.go:99] Using the docker driver based on user configuration
	I0916 23:47:36.004548  752719 start.go:304] selected driver: docker
	I0916 23:47:36.004568  752719 start.go:918] validating driver "docker" against <nil>
	I0916 23:47:36.004659  752719 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 23:47:36.058477  752719 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-09-16 23:47:36.049004412 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 23:47:36.058679  752719 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0916 23:47:36.059153  752719 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I0916 23:47:36.059300  752719 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0916 23:47:36.060950  752719 out.go:171] Using Docker driver with root privileges
	I0916 23:47:36.061867  752719 cni.go:84] Creating CNI manager for ""
	I0916 23:47:36.061951  752719 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 23:47:36.061964  752719 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 23:47:36.062033  752719 start.go:348] cluster config:
	{Name:download-only-187489 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-187489 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:container
d CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 23:47:36.062977  752719 out.go:99] Starting "download-only-187489" primary control-plane node in "download-only-187489" cluster
	I0916 23:47:36.062995  752719 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0916 23:47:36.063830  752719 out.go:99] Pulling base image v0.0.48 ...
	I0916 23:47:36.063856  752719 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I0916 23:47:36.063947  752719 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0916 23:47:36.079549  752719 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0916 23:47:36.079726  752719 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory
	I0916 23:47:36.079830  752719 image.go:150] Writing gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0916 23:47:36.420834  752719 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	I0916 23:47:36.420867  752719 cache.go:58] Caching tarball of preloaded images
	I0916 23:47:36.421074  752719 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I0916 23:47:36.422772  752719 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I0916 23:47:36.422793  752719 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4 ...
	I0916 23:47:36.521037  752719 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:2746dfda401436a5341e0500068bf339 -> /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	I0916 23:47:42.791168  752719 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 as a tarball
	
	
	* The control-plane node download-only-187489 host does not exist
	  To start a cluster, run: "minikube start -p download-only-187489"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.06s)
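The non-zero `minikube logs` exit captured above is expected rather than a defect: the profile was created with --download-only, so no control-plane node ever exists (the output says as much), which is why LogsDuration still passes despite the exit status 85. A rough reproduction sketch using the same flags the test runs, with "download-only-demo" as a stand-in profile name:

	out/minikube-linux-amd64 start -o=json --download-only -p download-only-demo --force \
	  --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker
	out/minikube-linux-amd64 logs -p download-only-demo   # exits non-zero: no node was created
	out/minikube-linux-amd64 delete -p download-only-demo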

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.20s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-187489
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/json-events (12.02s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-791848 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-791848 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (12.018410968s)
--- PASS: TestDownloadOnly/v1.34.0/json-events (12.02s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/preload-exists
I0916 23:48:01.285621  752707 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
I0916 23:48:01.285667  752707 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-791848
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-791848: exit status 85 (58.531492ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-187489 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-187489 │ jenkins │ v1.37.0 │ 16 Sep 25 23:47 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                 │ minikube             │ jenkins │ v1.37.0 │ 16 Sep 25 23:47 UTC │ 16 Sep 25 23:47 UTC │
	│ delete  │ -p download-only-187489                                                                                                                                                               │ download-only-187489 │ jenkins │ v1.37.0 │ 16 Sep 25 23:47 UTC │ 16 Sep 25 23:47 UTC │
	│ start   │ -o=json --download-only -p download-only-791848 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-791848 │ jenkins │ v1.37.0 │ 16 Sep 25 23:47 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/16 23:47:49
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 23:47:49.306695  753111 out.go:360] Setting OutFile to fd 1 ...
	I0916 23:47:49.306966  753111 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0916 23:47:49.306977  753111 out.go:374] Setting ErrFile to fd 2...
	I0916 23:47:49.306981  753111 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0916 23:47:49.307201  753111 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-749120/.minikube/bin
	I0916 23:47:49.307722  753111 out.go:368] Setting JSON to true
	I0916 23:47:49.308630  753111 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":9011,"bootTime":1758057458,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 23:47:49.308713  753111 start.go:140] virtualization: kvm guest
	I0916 23:47:49.310390  753111 out.go:99] [download-only-791848] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0916 23:47:49.310541  753111 notify.go:220] Checking for updates...
	I0916 23:47:49.311644  753111 out.go:171] MINIKUBE_LOCATION=21550
	I0916 23:47:49.312845  753111 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 23:47:49.313909  753111 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21550-749120/kubeconfig
	I0916 23:47:49.314845  753111 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-749120/.minikube
	I0916 23:47:49.315828  753111 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0916 23:47:49.317672  753111 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0916 23:47:49.317928  753111 driver.go:421] Setting default libvirt URI to qemu:///system
	I0916 23:47:49.339445  753111 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0916 23:47:49.339497  753111 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 23:47:49.392550  753111 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-09-16 23:47:49.382940179 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 23:47:49.392665  753111 docker.go:318] overlay module found
	I0916 23:47:49.394158  753111 out.go:99] Using the docker driver based on user configuration
	I0916 23:47:49.394194  753111 start.go:304] selected driver: docker
	I0916 23:47:49.394206  753111 start.go:918] validating driver "docker" against <nil>
	I0916 23:47:49.394298  753111 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 23:47:49.443210  753111 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-09-16 23:47:49.434547852 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 23:47:49.443400  753111 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0916 23:47:49.443866  753111 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I0916 23:47:49.444032  753111 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0916 23:47:49.445638  753111 out.go:171] Using Docker driver with root privileges
	I0916 23:47:49.446739  753111 cni.go:84] Creating CNI manager for ""
	I0916 23:47:49.446806  753111 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 23:47:49.446818  753111 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 23:47:49.446891  753111 start.go:348] cluster config:
	{Name:download-only-791848 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:download-only-791848 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:container
d CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 23:47:49.447945  753111 out.go:99] Starting "download-only-791848" primary control-plane node in "download-only-791848" cluster
	I0916 23:47:49.447960  753111 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0916 23:47:49.448929  753111 out.go:99] Pulling base image v0.0.48 ...
	I0916 23:47:49.448953  753111 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0916 23:47:49.449055  753111 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0916 23:47:49.464634  753111 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0916 23:47:49.464754  753111 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory
	I0916 23:47:49.464771  753111 image.go:68] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory, skipping pull
	I0916 23:47:49.464777  753111 image.go:137] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in cache, skipping pull
	I0916 23:47:49.464784  753111 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 as a tarball
	I0916 23:47:49.811808  753111 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.0/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4
	I0916 23:47:49.811859  753111 cache.go:58] Caching tarball of preloaded images
	I0916 23:47:49.812047  753111 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0916 23:47:49.813390  753111 out.go:99] Downloading Kubernetes v1.34.0 preload ...
	I0916 23:47:49.813403  753111 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 ...
	I0916 23:47:49.920681  753111 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.0/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:2b7b36e7513c2e517ecf49b6f3ce02cf -> /home/jenkins/minikube-integration/21550-749120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-791848 host does not exist
	  To start a cluster, run: "minikube start -p download-only-791848"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.0/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.0/DeleteAll (0.20s)

                                                
                                    
TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-791848
--- PASS: TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnlyKic (1.15s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-360516 --alsologtostderr --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "download-docker-360516" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-360516
--- PASS: TestDownloadOnlyKic (1.15s)

                                                
                                    
TestBinaryMirror (0.78s)

=== RUN   TestBinaryMirror
I0916 23:48:03.078481  752707 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-970476 --alsologtostderr --binary-mirror http://127.0.0.1:41595 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-970476" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-970476
--- PASS: TestBinaryMirror (0.78s)

                                                
                                    
TestOffline (50.46s)

=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-300446 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-300446 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (47.863465884s)
helpers_test.go:175: Cleaning up "offline-containerd-300446" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-300446
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-300446: (2.593594402s)
--- PASS: TestOffline (50.46s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-346612
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-346612: exit status 85 (49.575649ms)

                                                
                                                
-- stdout --
	* Profile "addons-346612" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-346612"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)
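In this run, exit status 85 is the "profile not found" path rather than a hard failure. A wrapper that wants to tolerate that case could branch on the exit code; a minimal sketch (plain shell, hypothetical handling, not part of the test):

    out/minikube-linux-amd64 addons enable dashboard -p addons-346612
    status=$?
    if [ "$status" -eq 85 ]; then
        echo 'profile addons-346612 not found yet; skipping addon enable'
    elif [ "$status" -ne 0 ]; then
        exit "$status"
    fi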

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-346612
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-346612: exit status 85 (49.793152ms)

                                                
                                                
-- stdout --
	* Profile "addons-346612" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-346612"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/Setup (153.25s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-346612 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-346612 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m33.247156728s)
--- PASS: TestAddons/Setup (153.25s)

                                                
                                    
TestAddons/serial/Volcano (39.59s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:868: volcano-scheduler stabilized in 14.212569ms
addons_test.go:884: volcano-controller stabilized in 14.265676ms
addons_test.go:876: volcano-admission stabilized in 14.322751ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-799f64f894-bnfxr" [3d0cbb43-a575-4b4c-82e7-94ea8669d581] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.003113854s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-589c7dd587-mq4fn" [961b39c7-4583-4f06-ad9a-a3500fee5fd3] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003411766s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-7dc6969b45-4gpsb" [3c44eed5-2588-4529-9b70-9b053ccec90b] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.002621527s
addons_test.go:903: (dbg) Run:  kubectl --context addons-346612 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-346612 create -f testdata/vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-346612 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [3bfa1e4c-8c7f-48b3-930e-102111cb9f04] Pending
helpers_test.go:352: "test-job-nginx-0" [3bfa1e4c-8c7f-48b3-930e-102111cb9f04] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "test-job-nginx-0" [3bfa1e4c-8c7f-48b3-930e-102111cb9f04] Running
addons_test.go:935: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.003415866s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-346612 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-346612 addons disable volcano --alsologtostderr -v=1: (11.260204696s)
--- PASS: TestAddons/serial/Volcano (39.59s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-346612 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-346612 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (9.46s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-346612 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-346612 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [ada0179e-50fd-4465-abb4-1e3ab62f4ac4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [ada0179e-50fd-4465-abb4-1e3ab62f4ac4] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003589528s
addons_test.go:694: (dbg) Run:  kubectl --context addons-346612 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-346612 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-346612 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.46s)

                                                
                                    
TestAddons/parallel/Registry (15.66s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 2.575459ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-xhttb" [0403b80e-6646-4577-b293-fecfc5acfe7f] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002173542s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-4t84v" [0a781ca3-4fa5-4330-be14-1f005358dbd6] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003259908s
addons_test.go:392: (dbg) Run:  kubectl --context addons-346612 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-346612 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-346612 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.844871267s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-346612 ip
2025/09/16 23:51:50 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-346612 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.66s)
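Both probes used above can be repeated by hand. The in-cluster check is the wget --spider call from the log; the host-side check hits the node IP the test printed (the /v2/ suffix below is the standard registry API root and is an assumption, not taken from the log):

    kubectl --context addons-346612 run --rm -it registry-probe --restart=Never \
        --image=gcr.io/k8s-minikube/busybox -- wget --spider -S http://registry.kube-system.svc.cluster.local
    curl -sI http://192.168.49.2:5000/v2/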

                                                
                                    
TestAddons/parallel/RegistryCreds (0.71s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.261764ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-346612
addons_test.go:332: (dbg) Run:  kubectl --context addons-346612 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-346612 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.71s)

                                                
                                    
TestAddons/parallel/Ingress (19.03s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-346612 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-346612 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-346612 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [d99bb91d-4c5a-426d-8aab-5a3881bf31fa] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [d99bb91d-4c5a-426d-8aab-5a3881bf31fa] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003915437s
I0916 23:51:50.321929  752707 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-346612 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-346612 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-346612 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-346612 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-346612 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-346612 addons disable ingress --alsologtostderr -v=1: (7.8015089s)
--- PASS: TestAddons/parallel/Ingress (19.03s)
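For reference, the same ingress and ingress-dns checks can be run from the host while the addons are still enabled; this sketch is built only from commands already in the log:

    MINIKUBE_IP="$(out/minikube-linux-amd64 -p addons-346612 ip)"
    curl -s -H 'Host: nginx.example.com' "http://${MINIKUBE_IP}/"
    nslookup hello-john.test "${MINIKUBE_IP}"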

                                                
                                    
TestAddons/parallel/InspektorGadget (6.3s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-crdh4" [8120d500-ba45-4fe1-bd6b-47353c5871c5] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004108801s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-346612 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (6.30s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.64s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.343845ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-4cddv" [556e84de-fccb-4e18-9470-ca781e19a9fb] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.002572929s
addons_test.go:463: (dbg) Run:  kubectl --context addons-346612 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-346612 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.64s)

                                                
                                    
TestAddons/parallel/CSI (44.38s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0916 23:51:42.247378  752707 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0916 23:51:42.250177  752707 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0916 23:51:42.250208  752707 kapi.go:107] duration metric: took 2.848875ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 2.869519ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-346612 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-346612 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-346612 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-346612 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-346612 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-346612 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [33a101ae-ff57-4961-968f-f8c69ea3604b] Pending
helpers_test.go:352: "task-pv-pod" [33a101ae-ff57-4961-968f-f8c69ea3604b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [33a101ae-ff57-4961-968f-f8c69ea3604b] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.002490056s
addons_test.go:572: (dbg) Run:  kubectl --context addons-346612 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-346612 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-346612 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-346612 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-346612 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-346612 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-346612 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-346612 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-346612 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-346612 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-346612 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-346612 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-346612 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-346612 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-346612 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-346612 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-346612 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-346612 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-346612 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-346612 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-346612 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [e5c726fb-e9dd-4eee-9c14-a68b1a6cd4ce] Pending
helpers_test.go:352: "task-pv-pod-restore" [e5c726fb-e9dd-4eee-9c14-a68b1a6cd4ce] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [e5c726fb-e9dd-4eee-9c14-a68b1a6cd4ce] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003708827s
addons_test.go:614: (dbg) Run:  kubectl --context addons-346612 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-346612 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-346612 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-346612 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-346612 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-346612 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.51026694s)
--- PASS: TestAddons/parallel/CSI (44.38s)
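The repeated jsonpath polls on the PVC phase above can be expressed more compactly with kubectl wait; a rough equivalent, assuming Bound is the phase the helper is waiting for:

    kubectl --context addons-346612 wait pvc/hpvc \
        --for=jsonpath='{.status.phase}'=Bound --timeout=6m0s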

                                                
                                    
TestAddons/parallel/Headlamp (17.43s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-346612 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-85f8f8dc54-tg2sq" [2a5afdc7-d480-4a9a-8ba4-66362555e8e3] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-85f8f8dc54-tg2sq" [2a5afdc7-d480-4a9a-8ba4-66362555e8e3] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003683768s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-346612 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-346612 addons disable headlamp --alsologtostderr -v=1: (5.67283184s)
--- PASS: TestAddons/parallel/Headlamp (17.43s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.5s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-85f6b7fc65-2wjwk" [4f3486dd-6a22-4ed2-9de4-452c7b374983] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003546258s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-346612 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.50s)

                                                
                                    
TestAddons/parallel/LocalPath (56.6s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-346612 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-346612 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-346612 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-346612 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-346612 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-346612 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-346612 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-346612 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-346612 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-346612 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-346612 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [545631cb-7784-4746-8789-63eb6a7eed15] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [545631cb-7784-4746-8789-63eb6a7eed15] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [545631cb-7784-4746-8789-63eb6a7eed15] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.002425687s
addons_test.go:967: (dbg) Run:  kubectl --context addons-346612 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-346612 ssh "cat /opt/local-path-provisioner/pvc-783153d1-23cd-45b5-8bbd-c80d99e13686_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-346612 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-346612 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-346612 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-346612 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.712841724s)
--- PASS: TestAddons/parallel/LocalPath (56.60s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.53s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-kn6hp" [3e79e283-5fa3-4cfe-bd0c-f28d1f170c95] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004044777s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-346612 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.53s)

                                                
                                    
TestAddons/parallel/Yakd (10.67s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-xjs7s" [cfc9e99b-2386-47ef-92e6-1f3c1bc14283] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003095071s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-346612 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-346612 addons disable yakd --alsologtostderr -v=1: (5.667399979s)
--- PASS: TestAddons/parallel/Yakd (10.67s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (6.47s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-4c45c" [21c30a17-cbee-4168-95da-f539b9c97686] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 6.003179295s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-346612 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (6.47s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.51s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-346612
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-346612: (12.276816437s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-346612
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-346612
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-346612
--- PASS: TestAddons/StoppedEnableDisable (12.51s)

                                                
                                    
TestCertOptions (29.31s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-002126 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-002126 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (25.807213957s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-002126 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-002126 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-002126 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-002126" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-002126
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-002126: (2.805325706s)
--- PASS: TestCertOptions (29.31s)
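While the profile is still up, the same SAN and port assertions can be checked against the live endpoint rather than the on-disk cert (illustrative sketch; assumes openssl is available and the cluster has not been deleted yet):

    echo | openssl s_client -connect "$(out/minikube-linux-amd64 -p cert-options-002126 ip):8555" 2>/dev/null \
        | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'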

                                                
                                    
TestCertExpiration (211.66s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-351946 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-351946 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (23.453140345s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-351946 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-351946 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (5.856385448s)
helpers_test.go:175: Cleaning up "cert-expiration-351946" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-351946
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-351946: (2.350483754s)
--- PASS: TestCertExpiration (211.66s)
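For scale: --cert-expiration=3m forces the certificates to lapse during the test, while the follow-up start renews them with --cert-expiration=8760h, i.e. 365 × 24 h = 8760 h, roughly one year.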

                                                
                                    
TestForceSystemdFlag (27.46s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-975812 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-975812 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (25.250207694s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-975812 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-975812" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-975812
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-975812: (1.940185784s)
--- PASS: TestForceSystemdFlag (27.46s)
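The assertion behind the config.toml dump above is presumably the systemd cgroup driver setting; a narrower probe, while the profile still exists, could be (SystemdCgroup is the stock containerd CRI key, stated here as an assumption):

    out/minikube-linux-amd64 -p force-systemd-flag-975812 ssh "grep -n SystemdCgroup /etc/containerd/config.toml"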

                                                
                                    
TestForceSystemdEnv (33.01s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-059670 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-059670 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (30.257031943s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-059670 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-059670" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-059670
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-059670: (2.424898786s)
--- PASS: TestForceSystemdEnv (33.01s)

                                                
                                    
TestDockerEnvContainerd (35.75s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux amd64
docker_test.go:181: (dbg) Run:  out/minikube-linux-amd64 start -p dockerenv-325652 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-amd64 start -p dockerenv-325652 --driver=docker  --container-runtime=containerd: (20.213466929s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-amd64 docker-env --ssh-host --ssh-add -p dockerenv-325652"
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXM4d0i7/agent.778586" SSH_AGENT_PID="778587" DOCKER_HOST=ssh://docker@127.0.0.1:33529 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXM4d0i7/agent.778586" SSH_AGENT_PID="778587" DOCKER_HOST=ssh://docker@127.0.0.1:33529 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXM4d0i7/agent.778586" SSH_AGENT_PID="778587" DOCKER_HOST=ssh://docker@127.0.0.1:33529 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.774415004s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXM4d0i7/agent.778586" SSH_AGENT_PID="778587" DOCKER_HOST=ssh://docker@127.0.0.1:33529 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-325652" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p dockerenv-325652
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p dockerenv-325652: (1.880271866s)
--- PASS: TestDockerEnvContainerd (35.75s)
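The test wires SSH_AUTH_SOCK and DOCKER_HOST into each command explicitly; in an interactive shell the usual pattern is to eval the same docker-env output once (sketch, not part of the test):

    eval "$(out/minikube-linux-amd64 docker-env --ssh-host --ssh-add -p dockerenv-325652)"
    docker image ls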

                                                
                                    
TestKVMDriverInstallOrUpdate (1.98s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (1.98s)

                                                
                                    
TestErrorSpam/setup (19.75s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-130706 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-130706 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-130706 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-130706 --driver=docker  --container-runtime=containerd: (19.745408543s)
--- PASS: TestErrorSpam/setup (19.75s)

                                                
                                    
TestErrorSpam/start (0.58s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-130706 --log_dir /tmp/nospam-130706 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-130706 --log_dir /tmp/nospam-130706 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-130706 --log_dir /tmp/nospam-130706 start --dry-run
--- PASS: TestErrorSpam/start (0.58s)

                                                
                                    
TestErrorSpam/status (0.88s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-130706 --log_dir /tmp/nospam-130706 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-130706 --log_dir /tmp/nospam-130706 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-130706 --log_dir /tmp/nospam-130706 status
--- PASS: TestErrorSpam/status (0.88s)

                                                
                                    
TestErrorSpam/pause (1.46s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-130706 --log_dir /tmp/nospam-130706 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-130706 --log_dir /tmp/nospam-130706 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-130706 --log_dir /tmp/nospam-130706 pause
--- PASS: TestErrorSpam/pause (1.46s)

                                                
                                    
TestErrorSpam/unpause (1.53s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-130706 --log_dir /tmp/nospam-130706 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-130706 --log_dir /tmp/nospam-130706 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-130706 --log_dir /tmp/nospam-130706 unpause
--- PASS: TestErrorSpam/unpause (1.53s)

                                                
                                    
TestErrorSpam/stop (1.89s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-130706 --log_dir /tmp/nospam-130706 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-130706 --log_dir /tmp/nospam-130706 stop: (1.723209813s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-130706 --log_dir /tmp/nospam-130706 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-130706 --log_dir /tmp/nospam-130706 stop
--- PASS: TestErrorSpam/stop (1.89s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21550-749120/.minikube/files/etc/test/nested/copy/752707/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (40.27s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-695580 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-695580 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (40.265998764s)
--- PASS: TestFunctional/serial/StartWithProxy (40.27s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (5.83s)

=== RUN   TestFunctional/serial/SoftStart
I0916 23:54:44.008589  752707 config.go:182] Loaded profile config "functional-695580": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-695580 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-695580 --alsologtostderr -v=8: (5.824994136s)
functional_test.go:678: soft start took 5.82585804s for "functional-695580" cluster.
I0916 23:54:49.834117  752707 config.go:182] Loaded profile config "functional-695580": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/SoftStart (5.83s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-695580 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (2.75s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.75s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.87s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-695580 /tmp/TestFunctionalserialCacheCmdcacheadd_local3091679692/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 cache add minikube-local-cache-test:functional-695580
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-695580 cache add minikube-local-cache-test:functional-695580: (1.570617892s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 cache delete minikube-local-cache-test:functional-695580
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-695580
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.87s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.52s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-695580 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (275.486119ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.52s)
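For reference, the flow this test exercises can be reproduced by hand. A minimal sketch, assuming the minikube binary is on PATH and the functional-695580 profile from this run still exists (the test invokes out/minikube-linux-amd64 instead):

  # remove the cached image from inside the node, confirm it is gone,
  # then push the on-disk cache back into the container runtime
  minikube -p functional-695580 ssh sudo crictl rmi registry.k8s.io/pause:latest
  minikube -p functional-695580 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image absent
  minikube -p functional-695580 cache reload
  minikube -p functional-695580 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again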

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 kubectl -- --context functional-695580 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-695580 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (45.94s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-695580 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0916 23:55:37.167133  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/addons-346612/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0916 23:55:37.173472  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/addons-346612/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0916 23:55:37.184770  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/addons-346612/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0916 23:55:37.206097  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/addons-346612/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0916 23:55:37.247401  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/addons-346612/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0916 23:55:37.328756  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/addons-346612/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0916 23:55:37.490196  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/addons-346612/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0916 23:55:37.811860  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/addons-346612/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0916 23:55:38.453876  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/addons-346612/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0916 23:55:39.735466  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/addons-346612/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0916 23:55:42.297628  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/addons-346612/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-695580 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (45.939236906s)
functional_test.go:776: restart took 45.939365921s for "functional-695580" cluster.
I0916 23:55:42.693380  752707 config.go:182] Loaded profile config "functional-695580": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/ExtraConfig (45.94s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-695580 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.36s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-695580 logs: (1.359236488s)
--- PASS: TestFunctional/serial/LogsCmd (1.36s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.36s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 logs --file /tmp/TestFunctionalserialLogsFileCmd1387147730/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-695580 logs --file /tmp/TestFunctionalserialLogsFileCmd1387147730/001/logs.txt: (1.355691696s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.36s)

                                                
                                    
TestFunctional/serial/InvalidService (3.94s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-695580 apply -f testdata/invalidsvc.yaml
E0916 23:55:47.419103  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/addons-346612/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-695580
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-695580: exit status 115 (322.322419ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30140 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-695580 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.94s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-695580 config get cpus: exit status 14 (58.290341ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-695580 config get cpus: exit status 14 (50.393105ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.34s)
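The exit status 14 seen above is expected: minikube config get returns it whenever the key is absent from the profile config. A minimal sketch of the same set/get/unset cycle, assuming the same profile name:

  minikube -p functional-695580 config unset cpus
  minikube -p functional-695580 config get cpus || echo "exit $? - key not set"   # exit 14
  minikube -p functional-695580 config set cpus 2
  minikube -p functional-695580 config get cpus                                   # prints 2
  minikube -p functional-695580 config unset cpus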

                                                
                                    
TestFunctional/parallel/DashboardCmd (8.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-695580 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-695580 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 802103: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.11s)

                                                
                                    
TestFunctional/parallel/DryRun (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-695580 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-695580 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (150.304702ms)

                                                
                                                
-- stdout --
	* [functional-695580] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21550
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21550-749120/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-749120/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 23:56:18.531193  800982 out.go:360] Setting OutFile to fd 1 ...
	I0916 23:56:18.531294  800982 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0916 23:56:18.531305  800982 out.go:374] Setting ErrFile to fd 2...
	I0916 23:56:18.531310  800982 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0916 23:56:18.531579  800982 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-749120/.minikube/bin
	I0916 23:56:18.532157  800982 out.go:368] Setting JSON to false
	I0916 23:56:18.533519  800982 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":9520,"bootTime":1758057458,"procs":266,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 23:56:18.533637  800982 start.go:140] virtualization: kvm guest
	I0916 23:56:18.535509  800982 out.go:179] * [functional-695580] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0916 23:56:18.536583  800982 out.go:179]   - MINIKUBE_LOCATION=21550
	I0916 23:56:18.536587  800982 notify.go:220] Checking for updates...
	I0916 23:56:18.538649  800982 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 23:56:18.541551  800982 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-749120/kubeconfig
	I0916 23:56:18.542560  800982 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-749120/.minikube
	I0916 23:56:18.543542  800982 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 23:56:18.544580  800982 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 23:56:18.546146  800982 config.go:182] Loaded profile config "functional-695580": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0916 23:56:18.546824  800982 driver.go:421] Setting default libvirt URI to qemu:///system
	I0916 23:56:18.572147  800982 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0916 23:56:18.572220  800982 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 23:56:18.626384  800982 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-09-16 23:56:18.616389972 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 23:56:18.626509  800982 docker.go:318] overlay module found
	I0916 23:56:18.628036  800982 out.go:179] * Using the docker driver based on existing profile
	I0916 23:56:18.629051  800982 start.go:304] selected driver: docker
	I0916 23:56:18.629064  800982 start.go:918] validating driver "docker" against &{Name:functional-695580 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-695580 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 23:56:18.629155  800982 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 23:56:18.630674  800982 out.go:203] 
	W0916 23:56:18.631675  800982 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0916 23:56:18.632571  800982 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-695580 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.35s)
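The non-zero exit is the point of this test: with --dry-run, minikube validates the request without creating anything, and a 250MB memory request is rejected against the 1800MB usable minimum with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY). A minimal sketch, assuming the same profile:

  minikube start -p functional-695580 --dry-run --memory 250MB \
      --alsologtostderr --driver=docker --container-runtime=containerd
  echo "exit status: $?"   # 23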

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-695580 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-695580 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (151.828487ms)

                                                
                                                
-- stdout --
	* [functional-695580] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21550
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21550-749120/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-749120/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 23:56:10.545646  797742 out.go:360] Setting OutFile to fd 1 ...
	I0916 23:56:10.545751  797742 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0916 23:56:10.545762  797742 out.go:374] Setting ErrFile to fd 2...
	I0916 23:56:10.545769  797742 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0916 23:56:10.546063  797742 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-749120/.minikube/bin
	I0916 23:56:10.546511  797742 out.go:368] Setting JSON to false
	I0916 23:56:10.547629  797742 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":9513,"bootTime":1758057458,"procs":255,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 23:56:10.547715  797742 start.go:140] virtualization: kvm guest
	I0916 23:56:10.549572  797742 out.go:179] * [functional-695580] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I0916 23:56:10.550719  797742 out.go:179]   - MINIKUBE_LOCATION=21550
	I0916 23:56:10.550725  797742 notify.go:220] Checking for updates...
	I0916 23:56:10.553003  797742 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 23:56:10.553903  797742 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-749120/kubeconfig
	I0916 23:56:10.554693  797742 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-749120/.minikube
	I0916 23:56:10.555479  797742 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 23:56:10.559829  797742 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 23:56:10.561260  797742 config.go:182] Loaded profile config "functional-695580": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0916 23:56:10.561896  797742 driver.go:421] Setting default libvirt URI to qemu:///system
	I0916 23:56:10.584720  797742 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0916 23:56:10.584868  797742 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 23:56:10.638913  797742 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:false NGoroutines:62 SystemTime:2025-09-16 23:56:10.630098703 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 23:56:10.639023  797742 docker.go:318] overlay module found
	I0916 23:56:10.640989  797742 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I0916 23:56:10.642019  797742 start.go:304] selected driver: docker
	I0916 23:56:10.642036  797742 start.go:918] validating driver "docker" against &{Name:functional-695580 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-695580 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 23:56:10.642151  797742 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 23:56:10.643720  797742 out.go:203] 
	W0916 23:56:10.644642  797742 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0916 23:56:10.645605  797742 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.95s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (15.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-695580 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-695580 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-hrwb8" [d616817f-94a5-4ef2-9689-58fb53e4ecc6] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-hrwb8" [d616817f-94a5-4ef2-9689-58fb53e4ecc6] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 15.004002454s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:30708
functional_test.go:1680: http://192.168.49.2:30708: success! body:
Request served by hello-node-connect-7d85dfc575-hrwb8

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:30708
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (15.51s)
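The steps above amount to a standard NodePort round trip. A minimal sketch, assuming the same context, with curl standing in for the Go HTTP client the test actually uses:

  kubectl --context functional-695580 create deployment hello-node-connect --image kicbase/echo-server
  kubectl --context functional-695580 expose deployment hello-node-connect --type=NodePort --port=8080
  minikube -p functional-695580 service hello-node-connect --url            # e.g. http://192.168.49.2:30708
  curl "$(minikube -p functional-695580 service hello-node-connect --url)"  # echo-server reports the request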

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (32.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [c7baf146-2641-41ef-a682-21ec2366da81] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003383301s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-695580 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-695580 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-695580 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-695580 apply -f testdata/storage-provisioner/pod.yaml
I0916 23:55:56.918840  752707 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [905d6380-e88b-474b-9ac8-98500a8d148b] Pending
E0916 23:55:57.660455  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/addons-346612/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "sp-pod" [905d6380-e88b-474b-9ac8-98500a8d148b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [905d6380-e88b-474b-9ac8-98500a8d148b] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 18.052243734s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-695580 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-695580 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-695580 apply -f testdata/storage-provisioner/pod.yaml
I0916 23:56:15.914532  752707 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [6edd0fe4-4962-4a10-b618-e77d737acf90] Pending
helpers_test.go:352: "sp-pod" [6edd0fe4-4962-4a10-b618-e77d737acf90] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [6edd0fe4-4962-4a10-b618-e77d737acf90] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.004024755s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-695580 exec sp-pod -- ls /tmp/mount
2025/09/16 23:56:26 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (32.52s)
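The sequence above checks that data written to the claim survives pod recreation. A minimal sketch using the same manifests from the test's testdata directory, assuming the same context:

  kubectl --context functional-695580 apply -f testdata/storage-provisioner/pvc.yaml
  kubectl --context functional-695580 apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-695580 exec sp-pod -- touch /tmp/mount/foo
  kubectl --context functional-695580 delete -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-695580 apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-695580 exec sp-pod -- ls /tmp/mount          # foo is still present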

                                                
                                    
TestFunctional/parallel/SSHCmd (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.58s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 ssh -n functional-695580 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 cp functional-695580:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4265289491/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 ssh -n functional-695580 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 ssh -n functional-695580 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.73s)

                                                
                                    
TestFunctional/parallel/MySQL (20.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-695580 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-4rklj" [78e62e4b-c541-45db-b6b7-df7e224ad1a4] Pending
helpers_test.go:352: "mysql-5bb876957f-4rklj" [78e62e4b-c541-45db-b6b7-df7e224ad1a4] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-4rklj" [78e62e4b-c541-45db-b6b7-df7e224ad1a4] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 16.002160117s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-695580 exec mysql-5bb876957f-4rklj -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-695580 exec mysql-5bb876957f-4rklj -- mysql -ppassword -e "show databases;": exit status 1 (112.48495ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0916 23:56:06.073019  752707 retry.go:31] will retry after 1.481601002s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-695580 exec mysql-5bb876957f-4rklj -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-695580 exec mysql-5bb876957f-4rklj -- mysql -ppassword -e "show databases;": exit status 1 (129.2996ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0916 23:56:07.684520  752707 retry.go:31] will retry after 2.147322769s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-695580 exec mysql-5bb876957f-4rklj -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (20.18s)

                                                
                                    
TestFunctional/parallel/FileSync (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/752707/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 ssh "sudo cat /etc/test/nested/copy/752707/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.28s)

                                                
                                    
TestFunctional/parallel/CertSync (1.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/752707.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 ssh "sudo cat /etc/ssl/certs/752707.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/752707.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 ssh "sudo cat /usr/share/ca-certificates/752707.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/7527072.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 ssh "sudo cat /etc/ssl/certs/7527072.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/7527072.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 ssh "sudo cat /usr/share/ca-certificates/7527072.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.77s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-695580 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-695580 ssh "sudo systemctl is-active docker": exit status 1 (250.569869ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-695580 ssh "sudo systemctl is-active crio": exit status 1 (249.822801ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.50s)
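Both exit status 1 results are expected: systemctl is-active exits 3 inside the node for an inactive unit, which minikube ssh surfaces as exit status 1. A minimal sketch, assuming the same profile; the containerd line is added here only for contrast and is not part of the test:

  minikube -p functional-695580 ssh "sudo systemctl is-active containerd"   # active (exit 0)
  minikube -p functional-695580 ssh "sudo systemctl is-active docker"       # inactive (remote exit 3)
  minikube -p functional-695580 ssh "sudo systemctl is-active crio"         # inactive (remote exit 3)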

                                                
                                    
TestFunctional/parallel/License (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.40s)

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.52s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)
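Note: the three UpdateContextCmd subtests above all run the same command and only differ in the kubeconfig state they start from. A minimal manual reproduction, assuming the functional-695580 profile is still running, looks like the sketch below; the kubectl verification steps are additions here, not part of the test.

# Re-sync the kubeconfig entry for the profile with the cluster's current endpoint
out/minikube-linux-amd64 -p functional-695580 update-context --alsologtostderr -v=2
# Verify kubectl can reach the refreshed context (added for illustration)
kubectl config current-context
kubectl --context functional-695580 get nodes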

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "358.260845ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "61.646618ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "332.250627ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "49.126149ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)
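Note: the ProfileCmd subtests time "profile list" in its plain, long (-l), JSON, and light JSON variants. A sketch of consuming the JSON output is below; the ".valid[].Name" path assumes the top-level valid/invalid arrays and Name field that minikube's profile list JSON currently uses, so verify the schema against your minikube version.

out/minikube-linux-amd64 profile list -o json --light \
  | jq -r '.valid[].Name'    # field names assumed; adjust if the schema differs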

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-695580 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-695580 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-695580 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-695580 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 796098: os: process already finished
helpers_test.go:525: unable to kill pid 795749: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.53s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-695580 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (18.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-695580 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [6e7c0cca-8fef-4669-b4ef-6258641e58d8] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [6e7c0cca-8fef-4669-b4ef-6258641e58d8] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 18.003177827s
I0916 23:56:09.935334  752707 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (18.19s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (8.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-695580 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-695580 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-s7kkr" [f054bcfb-1974-4386-9e94-f0016747782c] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-s7kkr" [f054bcfb-1974-4386-9e94-f0016747782c] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.012711111s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.16s)
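Note: DeployApp creates and exposes the echo-server deployment that the later ServiceCmd subtests query. Reproduced outside the harness the sequence is roughly the sketch below; the rollout status wait is substituted here for the test's pod-matching loop.

kubectl --context functional-695580 create deployment hello-node --image kicbase/echo-server
kubectl --context functional-695580 expose deployment hello-node --type=NodePort --port=8080
kubectl --context functional-695580 rollout status deployment/hello-node
out/minikube-linux-amd64 -p functional-695580 service hello-node --url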

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-695580 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.98.186.106 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
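Note: the tunnel subtests start "minikube tunnel", wait for nginx-svc to receive a LoadBalancer ingress IP, and then hit that IP directly. A minimal manual sketch, assuming testdata/testsvc.yaml from the minikube repo is available and that the tunnel (which typically needs root for route changes) is left running in another terminal:

# terminal 1: keep the tunnel in the foreground
out/minikube-linux-amd64 -p functional-695580 tunnel --alsologtostderr
# terminal 2: deploy the test service and curl its LoadBalancer IP once assigned
kubectl --context functional-695580 apply -f testdata/testsvc.yaml
IP=$(kubectl --context functional-695580 get svc nginx-svc \
      -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -sSf "http://${IP}"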

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-695580 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-695580 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.0
registry.k8s.io/kube-proxy:v1.34.0
registry.k8s.io/kube-controller-manager:v1.34.0
registry.k8s.io/kube-apiserver:v1.34.0
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-695580
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:functional-695580
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-695580 image ls --format short --alsologtostderr:
I0916 23:56:19.822497  802003 out.go:360] Setting OutFile to fd 1 ...
I0916 23:56:19.822624  802003 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0916 23:56:19.822636  802003 out.go:374] Setting ErrFile to fd 2...
I0916 23:56:19.822642  802003 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0916 23:56:19.822923  802003 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-749120/.minikube/bin
I0916 23:56:19.823734  802003 config.go:182] Loaded profile config "functional-695580": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0916 23:56:19.823869  802003 config.go:182] Loaded profile config "functional-695580": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0916 23:56:19.824428  802003 cli_runner.go:164] Run: docker container inspect functional-695580 --format={{.State.Status}}
I0916 23:56:19.846138  802003 ssh_runner.go:195] Run: systemctl --version
I0916 23:56:19.846199  802003 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-695580
I0916 23:56:19.867093  802003 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33539 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/functional-695580/id_rsa Username:docker}
I0916 23:56:19.975915  802003 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-695580 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:6e38f4 │ 9.06MB │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:cd073f │ 320kB  │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:0184c1 │ 298kB  │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:409467 │ 44.4MB │
│ registry.k8s.io/coredns/coredns             │ v1.12.1            │ sha256:52546a │ 22.4MB │
│ docker.io/library/mysql                     │ 5.7                │ sha256:510733 │ 138MB  │
│ docker.io/library/nginx                     │ latest             │ sha256:41f689 │ 72.3MB │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc       │ sha256:56cc51 │ 2.4MB  │
│ registry.k8s.io/kube-apiserver              │ v1.34.0            │ sha256:90550c │ 27.1MB │
│ registry.k8s.io/kube-proxy                  │ v1.34.0            │ sha256:df0860 │ 26MB   │
│ registry.k8s.io/kube-scheduler              │ v1.34.0            │ sha256:46169d │ 17.4MB │
│ registry.k8s.io/pause                       │ latest             │ sha256:350b16 │ 72.3kB │
│ docker.io/kicbase/echo-server               │ functional-695580  │ sha256:9056ab │ 2.37MB │
│ registry.k8s.io/etcd                        │ 3.6.4-0            │ sha256:5f1f52 │ 74.3MB │
│ registry.k8s.io/kube-controller-manager     │ v1.34.0            │ sha256:a0af72 │ 22.8MB │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:da86e6 │ 315kB  │
│ docker.io/library/minikube-local-cache-test │ functional-695580  │ sha256:33468a │ 991B   │
│ docker.io/library/nginx                     │ alpine             │ sha256:4a8601 │ 22.5MB │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-695580 image ls --format table --alsologtostderr:
I0916 23:56:22.158184  803300 out.go:360] Setting OutFile to fd 1 ...
I0916 23:56:22.158492  803300 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0916 23:56:22.158504  803300 out.go:374] Setting ErrFile to fd 2...
I0916 23:56:22.158510  803300 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0916 23:56:22.158718  803300 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-749120/.minikube/bin
I0916 23:56:22.159353  803300 config.go:182] Loaded profile config "functional-695580": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0916 23:56:22.159489  803300 config.go:182] Loaded profile config "functional-695580": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0916 23:56:22.159880  803300 cli_runner.go:164] Run: docker container inspect functional-695580 --format={{.State.Status}}
I0916 23:56:22.179333  803300 ssh_runner.go:195] Run: systemctl --version
I0916 23:56:22.179390  803300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-695580
I0916 23:56:22.202597  803300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33539 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/functional-695580/id_rsa Username:docker}
I0916 23:56:22.303325  803300 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-695580 image ls --format json --alsologtostderr:
[{"id":"sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb"],"repoTags":["docker.io/library/mysql:5.7"],"size":"137909886"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90","repoDigests":["registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812"]
,"repoTags":["registry.k8s.io/kube-apiserver:v1.34.0"],"size":"27066504"},{"id":"sha256:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce","repoDigests":["registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.0"],"size":"25963701"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-695580"],"size":"2372971"},{"id":"sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"44375501"},{"id":"sha256:41f689c209100e6cadf3ce7fdd02035e90dbd1d586716bf8fc6ea55c36
5b2d81","repoDigests":["docker.io/library/nginx@sha256:d5f28ef21aabddd098f3dbc21fe5b7a7d7a184720bc07da0b6c9b9820e97f25e"],"repoTags":["docker.io/library/nginx:latest"],"size":"72319182"},{"id":"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"22384805"},{"id":"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"74311308"},{"id":"sha256:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc","repoDigests":["registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.0"],"size":"17385558"},{"id":"sha256:da86e6ba6ca197bf6bc
5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"320448"},{"id":"sha256:4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9","repoDigests":["docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8"],"repoTags":["docker.io/library/nginx:alpine"],"size":"22477192"},{"id":"sha256:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.0"],"size":"22819719"},{"id":"sha256:33468a62d9126381fdeac6f8367954a7f92fe03599296abdf99493e621d6b594","repoDigests":[],"
repoTags":["docker.io/library/minikube-local-cache-test:functional-695580"],"size":"991"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-695580 image ls --format json --alsologtostderr:
I0916 23:56:21.900710  803250 out.go:360] Setting OutFile to fd 1 ...
I0916 23:56:21.900948  803250 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0916 23:56:21.900956  803250 out.go:374] Setting ErrFile to fd 2...
I0916 23:56:21.900960  803250 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0916 23:56:21.901155  803250 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-749120/.minikube/bin
I0916 23:56:21.901738  803250 config.go:182] Loaded profile config "functional-695580": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0916 23:56:21.901833  803250 config.go:182] Loaded profile config "functional-695580": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0916 23:56:21.902224  803250 cli_runner.go:164] Run: docker container inspect functional-695580 --format={{.State.Status}}
I0916 23:56:21.921079  803250 ssh_runner.go:195] Run: systemctl --version
I0916 23:56:21.921137  803250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-695580
I0916 23:56:21.940745  803250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33539 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/functional-695580/id_rsa Username:docker}
I0916 23:56:22.040931  803250 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-695580 image ls --format yaml --alsologtostderr:
- id: sha256:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.0
size: "17385558"
- id: sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "44375501"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "74311308"
- id: sha256:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.0
size: "27066504"
- id: sha256:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce
repoDigests:
- registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067
repoTags:
- registry.k8s.io/kube-proxy:v1.34.0
size: "25963701"
- id: sha256:33468a62d9126381fdeac6f8367954a7f92fe03599296abdf99493e621d6b594
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-695580
size: "991"
- id: sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
repoTags:
- docker.io/library/mysql:5.7
size: "137909886"
- id: sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "22384805"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "320448"
- id: sha256:4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9
repoDigests:
- docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8
repoTags:
- docker.io/library/nginx:alpine
size: "22477192"
- id: sha256:41f689c209100e6cadf3ce7fdd02035e90dbd1d586716bf8fc6ea55c365b2d81
repoDigests:
- docker.io/library/nginx@sha256:d5f28ef21aabddd098f3dbc21fe5b7a7d7a184720bc07da0b6c9b9820e97f25e
repoTags:
- docker.io/library/nginx:latest
size: "72319182"
- id: sha256:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.0
size: "22819719"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-695580
size: "2372971"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-695580 image ls --format yaml --alsologtostderr:
I0916 23:56:20.079389  802143 out.go:360] Setting OutFile to fd 1 ...
I0916 23:56:20.079695  802143 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0916 23:56:20.079705  802143 out.go:374] Setting ErrFile to fd 2...
I0916 23:56:20.079711  802143 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0916 23:56:20.079933  802143 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-749120/.minikube/bin
I0916 23:56:20.080591  802143 config.go:182] Loaded profile config "functional-695580": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0916 23:56:20.080727  802143 config.go:182] Loaded profile config "functional-695580": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0916 23:56:20.081115  802143 cli_runner.go:164] Run: docker container inspect functional-695580 --format={{.State.Status}}
I0916 23:56:20.100784  802143 ssh_runner.go:195] Run: systemctl --version
I0916 23:56:20.100829  802143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-695580
I0916 23:56:20.118591  802143 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33539 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/functional-695580/id_rsa Username:docker}
I0916 23:56:20.213727  802143 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)
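Note: the four ImageList subtests (short, table, json, yaml) only vary the output format of the same "image ls" call. For quick inspection the machine-readable formats combine with standard tools, e.g. the sketch below; the jq filter relies on the repoTags arrays visible in the JSON output above.

out/minikube-linux-amd64 -p functional-695580 image ls --format short   # bare repo:tag list
out/minikube-linux-amd64 -p functional-695580 image ls --format table   # boxed table, as shown above
out/minikube-linux-amd64 -p functional-695580 image ls --format json | jq -r '.[].repoTags[]'
out/minikube-linux-amd64 -p functional-695580 image ls --format yaml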

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (3.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-695580 ssh pgrep buildkitd: exit status 1 (375.268582ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 image build -t localhost/my-image:functional-695580 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-695580 image build -t localhost/my-image:functional-695580 testdata/build --alsologtostderr: (3.196642359s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-695580 image build -t localhost/my-image:functional-695580 testdata/build --alsologtostderr:
I0916 23:56:20.686833  802810 out.go:360] Setting OutFile to fd 1 ...
I0916 23:56:20.686944  802810 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0916 23:56:20.686953  802810 out.go:374] Setting ErrFile to fd 2...
I0916 23:56:20.686957  802810 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0916 23:56:20.687167  802810 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-749120/.minikube/bin
I0916 23:56:20.687787  802810 config.go:182] Loaded profile config "functional-695580": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0916 23:56:20.688375  802810 config.go:182] Loaded profile config "functional-695580": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0916 23:56:20.688802  802810 cli_runner.go:164] Run: docker container inspect functional-695580 --format={{.State.Status}}
I0916 23:56:20.706613  802810 ssh_runner.go:195] Run: systemctl --version
I0916 23:56:20.706657  802810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-695580
I0916 23:56:20.723589  802810 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33539 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/functional-695580/id_rsa Username:docker}
I0916 23:56:20.816215  802810 build_images.go:161] Building image from path: /tmp/build.963909284.tar
I0916 23:56:20.816315  802810 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0916 23:56:20.825784  802810 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.963909284.tar
I0916 23:56:20.829381  802810 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.963909284.tar: stat -c "%s %y" /var/lib/minikube/build/build.963909284.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.963909284.tar': No such file or directory
I0916 23:56:20.829435  802810 ssh_runner.go:362] scp /tmp/build.963909284.tar --> /var/lib/minikube/build/build.963909284.tar (3072 bytes)
I0916 23:56:20.854824  802810 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.963909284
I0916 23:56:20.863699  802810 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.963909284 -xf /var/lib/minikube/build/build.963909284.tar
I0916 23:56:20.872875  802810 containerd.go:394] Building image: /var/lib/minikube/build/build.963909284
I0916 23:56:20.872937  802810 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.963909284 --local dockerfile=/var/lib/minikube/build/build.963909284 --output type=image,name=localhost/my-image:functional-695580
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.6s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.4s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.6s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.4s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.1s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:ce227570f7c1b36a192a30e188cb7d042d0bcb28353584d8e2e81b5c712139e5 done
#8 exporting config sha256:d4b8950d9c11345525dd167750da960a4bc3fb4057f0fec05bbd8a5473d12cd5 done
#8 naming to localhost/my-image:functional-695580 done
#8 DONE 0.1s
I0916 23:56:23.801581  802810 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.963909284 --local dockerfile=/var/lib/minikube/build/build.963909284 --output type=image,name=localhost/my-image:functional-695580: (2.928598205s)
I0916 23:56:23.801654  802810 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.963909284
I0916 23:56:23.813602  802810 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.963909284.tar
I0916 23:56:23.823183  802810 build_images.go:217] Built localhost/my-image:functional-695580 from /tmp/build.963909284.tar
I0916 23:56:23.823221  802810 build_images.go:133] succeeded building to: functional-695580
I0916 23:56:23.823228  802810 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.82s)
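Note: the buildkit steps #5 through #7 above imply a three-instruction Dockerfile under testdata/build. A standalone reproduction, reconstructed from those steps, is sketched below; the heredoc Dockerfile and the placeholder content.txt are assumptions standing in for the repo's actual testdata.

mkdir -p /tmp/imagebuild && cd /tmp/imagebuild
echo "hello" > content.txt                     # placeholder; the repo's content.txt differs
cat > Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox:latest
RUN true
ADD content.txt /
EOF
out/minikube-linux-amd64 -p functional-695580 image build -t localhost/my-image:functional-695580 . --alsologtostderr
out/minikube-linux-amd64 -p functional-695580 image ls | grep my-image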

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.769090836s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-695580
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.79s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (7.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-695580 /tmp/TestFunctionalparallelMountCmdany-port998819449/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1758066970651444473" to /tmp/TestFunctionalparallelMountCmdany-port998819449/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1758066970651444473" to /tmp/TestFunctionalparallelMountCmdany-port998819449/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1758066970651444473" to /tmp/TestFunctionalparallelMountCmdany-port998819449/001/test-1758066970651444473
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-695580 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (254.182238ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0916 23:56:10.905974  752707 retry.go:31] will retry after 450.804694ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 16 23:56 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 16 23:56 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 16 23:56 test-1758066970651444473
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 ssh cat /mount-9p/test-1758066970651444473
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-695580 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [4189076f-594a-4ff1-b826-d8e32f6f9bb8] Pending
helpers_test.go:352: "busybox-mount" [4189076f-594a-4ff1-b826-d8e32f6f9bb8] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [4189076f-594a-4ff1-b826-d8e32f6f9bb8] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [4189076f-594a-4ff1-b826-d8e32f6f9bb8] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.004339968s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-695580 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-695580 /tmp/TestFunctionalparallelMountCmdany-port998819449/001:/mount-9p --alsologtostderr -v=1] ...
E0916 23:56:18.142358  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/addons-346612/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.59s)
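Note: MountCmd/any-port exercises the 9p host mount end to end: files written on the host become visible inside the guest and to pods. Outside the harness the same flow is roughly the sketch below; the mkdir/echo lines and the /tmp/mount-demo path are illustrative, not from the test.

mkdir -p /tmp/mount-demo && echo "created-by-hand" > /tmp/mount-demo/hello.txt
out/minikube-linux-amd64 mount -p functional-695580 /tmp/mount-demo:/mount-9p --alsologtostderr -v=1 &
MOUNT_PID=$!
out/minikube-linux-amd64 -p functional-695580 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-amd64 -p functional-695580 ssh "cat /mount-9p/hello.txt"
kill "$MOUNT_PID"      # or: out/minikube-linux-amd64 mount -p functional-695580 --kill=true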

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 image load --daemon kicbase/echo-server:functional-695580 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.04s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 image load --daemon kicbase/echo-server:functional-695580 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.94s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-695580
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 image load --daemon kicbase/echo-server:functional-695580 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.89s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.91s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 image save kicbase/echo-server:functional-695580 /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 image rm kicbase/echo-server:functional-695580 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 service list -o json
functional_test.go:1504: Took "911.16971ms" to run "out/minikube-linux-amd64 -p functional-695580 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.91s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.55s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-695580
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 image save --daemon kicbase/echo-server:functional-695580 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-695580
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.41s)
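Note: the ImageCommands save/load subtests above form a round trip between the host Docker daemon, a tarball, and the cluster's containerd image store. Condensed into one sequence (same commands as the tests, reordered for readability; the /tmp tarball path is a stand-in for the workspace path used in the log):

docker pull kicbase/echo-server:1.0
docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-695580
out/minikube-linux-amd64 -p functional-695580 image load --daemon kicbase/echo-server:functional-695580
out/minikube-linux-amd64 -p functional-695580 image save kicbase/echo-server:functional-695580 /tmp/echo-server-save.tar
out/minikube-linux-amd64 -p functional-695580 image rm kicbase/echo-server:functional-695580
out/minikube-linux-amd64 -p functional-695580 image load /tmp/echo-server-save.tar
out/minikube-linux-amd64 -p functional-695580 image save --daemon kicbase/echo-server:functional-695580
docker image inspect kicbase/echo-server:functional-695580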

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:30164
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.55s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.54s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (2.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-695580 /tmp/TestFunctionalparallelMountCmdspecific-port1681205866/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-695580 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (294.883835ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0916 23:56:18.533111  752707 retry.go:31] will retry after 642.909911ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-695580 /tmp/TestFunctionalparallelMountCmdspecific-port1681205866/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-695580 ssh "sudo umount -f /mount-9p": exit status 1 (312.140905ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-695580 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-695580 /tmp/TestFunctionalparallelMountCmdspecific-port1681205866/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.03s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30164
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.56s)
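Note: the HTTPS, Format, and URL subtests resolve the hello-node NodePort endpoint in different shapes. The resulting URL can be exercised directly as sketched below; the curl call is an addition, and echo-server is expected to reply with the request details.

URL=$(out/minikube-linux-amd64 -p functional-695580 service hello-node --url)
curl -s "$URL"
out/minikube-linux-amd64 -p functional-695580 service hello-node --url --format='{{.IP}}'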

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-695580 /tmp/TestFunctionalparallelMountCmdVerifyCleanup864128360/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-695580 /tmp/TestFunctionalparallelMountCmdVerifyCleanup864128360/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-695580 /tmp/TestFunctionalparallelMountCmdVerifyCleanup864128360/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-695580 ssh "findmnt -T" /mount1: exit status 1 (370.010742ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0916 23:56:20.642002  752707 retry.go:31] will retry after 405.470963ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-695580 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-695580 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-695580 /tmp/TestFunctionalparallelMountCmdVerifyCleanup864128360/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-695580 /tmp/TestFunctionalparallelMountCmdVerifyCleanup864128360/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-695580 /tmp/TestFunctionalparallelMountCmdVerifyCleanup864128360/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.59s)
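Note: VerifyCleanup starts three mounts of the same host directory and then tears them all down with a single --kill. The cleanup flow on its own, with /tmp/demo as an illustrative host path:

out/minikube-linux-amd64 mount -p functional-695580 /tmp/demo:/mount1 --alsologtostderr -v=1 &
out/minikube-linux-amd64 mount -p functional-695580 /tmp/demo:/mount2 --alsologtostderr -v=1 &
out/minikube-linux-amd64 mount -p functional-695580 /tmp/demo:/mount3 --alsologtostderr -v=1 &
out/minikube-linux-amd64 mount -p functional-695580 --kill=true   # kills every mount process for the profile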

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-695580
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-695580
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-695580
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (127.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E0916 23:56:59.104330  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/addons-346612/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0916 23:58:21.026667  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/addons-346612/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-472903 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (2m6.829837065s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (127.51s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-472903 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.69s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.54s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.75s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (24.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-472903 stop --alsologtostderr -v 5: (24.003841034s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-472903 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-472903 status --alsologtostderr -v 5: exit status 7 (100.690862ms)

                                                
                                                
-- stdout --
	ha-472903
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-472903-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-472903-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 00:20:34.716911  853118 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:20:34.717048  853118 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:20:34.717060  853118 out.go:374] Setting ErrFile to fd 2...
	I0917 00:20:34.717066  853118 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:20:34.717288  853118 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-749120/.minikube/bin
	I0917 00:20:34.717482  853118 out.go:368] Setting JSON to false
	I0917 00:20:34.717507  853118 mustload.go:65] Loading cluster: ha-472903
	I0917 00:20:34.717639  853118 notify.go:220] Checking for updates...
	I0917 00:20:34.717928  853118 config.go:182] Loaded profile config "ha-472903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:20:34.717954  853118 status.go:174] checking status of ha-472903 ...
	I0917 00:20:34.718374  853118 cli_runner.go:164] Run: docker container inspect ha-472903 --format={{.State.Status}}
	I0917 00:20:34.736705  853118 status.go:371] ha-472903 host status = "Stopped" (err=<nil>)
	I0917 00:20:34.736733  853118 status.go:384] host is not running, skipping remaining checks
	I0917 00:20:34.736738  853118 status.go:176] ha-472903 status: &{Name:ha-472903 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:20:34.736755  853118 status.go:174] checking status of ha-472903-m02 ...
	I0917 00:20:34.737037  853118 cli_runner.go:164] Run: docker container inspect ha-472903-m02 --format={{.State.Status}}
	I0917 00:20:34.754139  853118 status.go:371] ha-472903-m02 host status = "Stopped" (err=<nil>)
	I0917 00:20:34.754155  853118 status.go:384] host is not running, skipping remaining checks
	I0917 00:20:34.754174  853118 status.go:176] ha-472903-m02 status: &{Name:ha-472903-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:20:34.754193  853118 status.go:174] checking status of ha-472903-m04 ...
	I0917 00:20:34.754432  853118 cli_runner.go:164] Run: docker container inspect ha-472903-m04 --format={{.State.Status}}
	I0917 00:20:34.770861  853118 status.go:371] ha-472903-m04 host status = "Stopped" (err=<nil>)
	I0917 00:20:34.770893  853118 status.go:384] host is not running, skipping remaining checks
	I0917 00:20:34.770906  853118 status.go:176] ha-472903-m04 status: &{Name:ha-472903-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (24.10s)
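Worth noting from the stderr above: with every node stopped, minikube status still prints the per-node report but exits non-zero (exit status 7 in this run). A small sketch, assuming that exit-code behaviour holds, of how a caller could read the status without treating the stopped state as a hard failure; the binary path and profile name are taken from the log:

// Sketch: distinguish "cluster stopped" (non-zero exit, status still printed)
// from "minikube could not run at all". Exit-code semantics are assumed from
// the log above, not from minikube documentation.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func clusterStatus(profile string) (string, int, error) {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "status").CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Stopped/unreachable nodes are reported via the exit code,
		// while the per-node breakdown is still on stdout.
		return string(out), exitErr.ExitCode(), nil
	}
	if err != nil {
		return "", 0, err // the binary itself failed to execute
	}
	return string(out), 0, nil
}

func main() {
	out, code, err := clusterStatus("ha-472903")
	if err != nil {
		panic(err)
	}
	fmt.Printf("exit code %d\n%s", code, out)
}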

                                                
                                    
TestJSONOutput/start/Command (40.97s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-954189 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-954189 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (40.964263959s)
--- PASS: TestJSONOutput/start/Command (40.97s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.68s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-954189 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.68s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.62s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-954189 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.62s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.67s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-954189 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-954189 --output=json --user=testUser: (5.671018988s)
--- PASS: TestJSONOutput/stop/Command (5.67s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.19s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-525431 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-525431 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (59.943015ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"65f90ef8-aeb8-4823-8f32-d42c3e5653e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-525431] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ad32a24d-8a4b-468f-8667-517cf3342297","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21550"}}
	{"specversion":"1.0","id":"626fc525-fa38-44d5-9161-1f826d51ca8a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"82af7e44-f469-4ed1-95e1-7a923598c7f0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21550-749120/kubeconfig"}}
	{"specversion":"1.0","id":"a9388529-093c-475b-a0bb-624119736bbb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-749120/.minikube"}}
	{"specversion":"1.0","id":"9ce8c338-da4b-485f-b2fe-1c9176c3fe3d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"14423583-aba5-46a0-a4b4-3d2292b70b8f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"3c00d5ef-7791-44c5-b112-14053a679514","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-525431" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-525431
--- PASS: TestErrorJSONOutput (0.19s)
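The stdout above shows what --output=json emits: one CloudEvents-style JSON object per line, with the type field separating setup steps, info messages and the final error event. A minimal consumer sketch, using only the field names visible in the log (everything else here is illustrative):

// Sketch: read a minikube --output=json stream from stdin and report the
// error event, e.g. minikube start ... --output=json | this-program.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type minikubeEvent struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev minikubeEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON noise
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("minikube failed (exit code %s): %s\n", ev.Data["exitcode"], ev.Data["message"])
		}
	}
}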

                                                
                                    
TestKicCustomNetwork/create_custom_network (35.59s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-853350 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-853350 --network=: (33.458410347s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-853350" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-853350
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-853350: (2.105861566s)
--- PASS: TestKicCustomNetwork/create_custom_network (35.59s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (23.54s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-761613 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-761613 --network=bridge: (21.592474438s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-761613" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-761613
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-761613: (1.927804118s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (23.54s)

                                                
                                    
TestKicExistingNetwork (24.31s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I0917 00:28:32.825301  752707 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0917 00:28:32.842963  752707 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0917 00:28:32.843060  752707 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0917 00:28:32.843083  752707 cli_runner.go:164] Run: docker network inspect existing-network
W0917 00:28:32.860192  752707 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0917 00:28:32.860234  752707 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I0917 00:28:32.860250  752707 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I0917 00:28:32.860401  752707 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0917 00:28:32.877933  752707 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-22d49b2f397d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:2e:31:75:1d:65:13} reservation:<nil>}
I0917 00:28:32.878336  752707 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001917bd0}
I0917 00:28:32.878368  752707 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0917 00:28:32.878441  752707 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0917 00:28:32.935972  752707 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-996679 --network=existing-network
E0917 00:28:40.236092  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/addons-346612/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-996679 --network=existing-network: (22.259589072s)
helpers_test.go:175: Cleaning up "existing-network-996679" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-996679
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-996679: (1.907792144s)
I0917 00:28:57.121720  752707 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (24.31s)
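The TestKicExistingNetwork log shows the subnet picker skipping 192.168.49.0/24 (already taken by an earlier cluster's bridge) and creating the network on 192.168.58.0/24. A rough sketch of that idea, with the step between candidates inferred from those two values and the rest assumed for illustration:

// Sketch: walk candidate private /24 subnets, skip any already used by an
// existing docker network, and return the first free one.
package main

import (
	"fmt"
	"net"
)

func firstFreeSubnet(taken map[string]bool) (*net.IPNet, error) {
	for third := 49; third <= 247; third += 9 { // 49 -> 58 matches the log above
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if taken[cidr] {
			continue
		}
		_, subnet, err := net.ParseCIDR(cidr)
		if err != nil {
			return nil, err
		}
		return subnet, nil
	}
	return nil, fmt.Errorf("no free private /24 found")
}

func main() {
	// 192.168.49.0/24 is already occupied in the run above.
	subnet, err := firstFreeSubnet(map[string]bool{"192.168.49.0/24": true})
	if err != nil {
		panic(err)
	}
	fmt.Println("using free private subnet", subnet) // -> 192.168.58.0/24
}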

                                                
                                    
TestKicCustomSubnet (25.62s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-306735 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-306735 --subnet=192.168.60.0/24: (23.542832556s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-306735 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-306735" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-306735
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-306735: (2.059100454s)
--- PASS: TestKicCustomSubnet (25.62s)

                                                
                                    
TestKicStaticIP (24.24s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-929629 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-929629 --static-ip=192.168.200.200: (22.054736827s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-929629 ip
helpers_test.go:175: Cleaning up "static-ip-929629" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-929629
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-929629: (2.055203099s)
--- PASS: TestKicStaticIP (24.24s)

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (48.29s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-677838 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-677838 --driver=docker  --container-runtime=containerd: (20.641047447s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-695497 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-695497 --driver=docker  --container-runtime=containerd: (21.942838073s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-677838
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-695497
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-695497" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-695497
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-695497: (2.259345263s)
helpers_test.go:175: Cleaning up "first-677838" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-677838
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-677838: (2.27744085s)
--- PASS: TestMinikubeProfile (48.29s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (5.56s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-704899 --memory=3072 --mount-string /tmp/TestMountStartserial1006147594/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
E0917 00:30:37.159873  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/addons-346612/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-704899 --memory=3072 --mount-string /tmp/TestMountStartserial1006147594/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (4.561951163s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.56s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-704899 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (5.48s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-723333 --memory=3072 --mount-string /tmp/TestMountStartserial1006147594/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-723333 --memory=3072 --mount-string /tmp/TestMountStartserial1006147594/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (4.477848045s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.48s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-723333 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.62s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-704899 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-704899 --alsologtostderr -v=5: (1.621788957s)
--- PASS: TestMountStart/serial/DeleteFirst (1.62s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.24s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-723333 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.24s)

                                                
                                    
TestMountStart/serial/Stop (1.17s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-723333
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-723333: (1.174261527s)
--- PASS: TestMountStart/serial/Stop (1.17s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.48s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-723333
E0917 00:30:49.958639  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/functional-695580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-723333: (6.482027569s)
--- PASS: TestMountStart/serial/RestartStopped (7.48s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.24s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-723333 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.24s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (52.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-136409 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-136409 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (52.429743815s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136409 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (52.88s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (17.67s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-136409 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-136409 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-136409 -- rollout status deployment/busybox: (16.256061456s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-136409 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-136409 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-136409 -- exec busybox-7b57f96db7-6scg7 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-136409 -- exec busybox-7b57f96db7-9qnz6 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-136409 -- exec busybox-7b57f96db7-6scg7 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-136409 -- exec busybox-7b57f96db7-9qnz6 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-136409 -- exec busybox-7b57f96db7-6scg7 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-136409 -- exec busybox-7b57f96db7-9qnz6 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (17.67s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.78s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-136409 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-136409 -- exec busybox-7b57f96db7-6scg7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-136409 -- exec busybox-7b57f96db7-6scg7 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-136409 -- exec busybox-7b57f96db7-9qnz6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-136409 -- exec busybox-7b57f96db7-9qnz6 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.78s)
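The ping test above resolves host.minikube.internal inside each busybox pod (taking line 5, field 3 of the nslookup output, per the awk/cut pipe in the log) and then pings the returned gateway address. A sketch of the same check, assuming plain kubectl against the profile's context instead of the minikube kubectl wrapper; the pod name is copied from this run:

// Sketch: extract the host IP seen from inside a pod, then ping it once.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func hostIPFromPod(context, pod string) (string, error) {
	script := "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	out, err := exec.Command("kubectl", "--context", context, "exec", pod,
		"--", "sh", "-c", script).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	ip, err := hostIPFromPod("multinode-136409", "busybox-7b57f96db7-6scg7")
	if err != nil {
		panic(err)
	}
	fmt.Println("host.minikube.internal resolves to", ip) // 192.168.67.1 in the run above
	_ = exec.Command("kubectl", "--context", "multinode-136409", "exec",
		"busybox-7b57f96db7-6scg7", "--", "ping", "-c", "1", ip).Run()
}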

                                                
                                    
TestMultiNode/serial/AddNode (12.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-136409 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-136409 -v=5 --alsologtostderr: (11.769443212s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136409 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (12.39s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-136409 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.65s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.65s)

                                                
                                    
TestMultiNode/serial/CopyFile (9.16s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136409 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136409 cp testdata/cp-test.txt multinode-136409:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136409 ssh -n multinode-136409 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136409 cp multinode-136409:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile870268172/001/cp-test_multinode-136409.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136409 ssh -n multinode-136409 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136409 cp multinode-136409:/home/docker/cp-test.txt multinode-136409-m02:/home/docker/cp-test_multinode-136409_multinode-136409-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136409 ssh -n multinode-136409 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136409 ssh -n multinode-136409-m02 "sudo cat /home/docker/cp-test_multinode-136409_multinode-136409-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136409 cp multinode-136409:/home/docker/cp-test.txt multinode-136409-m03:/home/docker/cp-test_multinode-136409_multinode-136409-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136409 ssh -n multinode-136409 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136409 ssh -n multinode-136409-m03 "sudo cat /home/docker/cp-test_multinode-136409_multinode-136409-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136409 cp testdata/cp-test.txt multinode-136409-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136409 ssh -n multinode-136409-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136409 cp multinode-136409-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile870268172/001/cp-test_multinode-136409-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136409 ssh -n multinode-136409-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136409 cp multinode-136409-m02:/home/docker/cp-test.txt multinode-136409:/home/docker/cp-test_multinode-136409-m02_multinode-136409.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136409 ssh -n multinode-136409-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136409 ssh -n multinode-136409 "sudo cat /home/docker/cp-test_multinode-136409-m02_multinode-136409.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136409 cp multinode-136409-m02:/home/docker/cp-test.txt multinode-136409-m03:/home/docker/cp-test_multinode-136409-m02_multinode-136409-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136409 ssh -n multinode-136409-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136409 ssh -n multinode-136409-m03 "sudo cat /home/docker/cp-test_multinode-136409-m02_multinode-136409-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136409 cp testdata/cp-test.txt multinode-136409-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136409 ssh -n multinode-136409-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136409 cp multinode-136409-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile870268172/001/cp-test_multinode-136409-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136409 ssh -n multinode-136409-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136409 cp multinode-136409-m03:/home/docker/cp-test.txt multinode-136409:/home/docker/cp-test_multinode-136409-m03_multinode-136409.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136409 ssh -n multinode-136409-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136409 ssh -n multinode-136409 "sudo cat /home/docker/cp-test_multinode-136409-m03_multinode-136409.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136409 cp multinode-136409-m03:/home/docker/cp-test.txt multinode-136409-m02:/home/docker/cp-test_multinode-136409-m03_multinode-136409-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136409 ssh -n multinode-136409-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136409 ssh -n multinode-136409-m02 "sudo cat /home/docker/cp-test_multinode-136409-m03_multinode-136409-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.16s)

                                                
                                    
TestMultiNode/serial/StopNode (2.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136409 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-136409 node stop m03: (1.213961998s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136409 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-136409 status: exit status 7 (455.929407ms)

                                                
                                                
-- stdout --
	multinode-136409
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-136409-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-136409-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136409 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-136409 status --alsologtostderr: exit status 7 (459.887976ms)

                                                
                                                
-- stdout --
	multinode-136409
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-136409-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-136409-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 00:32:34.718240  913902 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:32:34.718346  913902 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:32:34.718354  913902 out.go:374] Setting ErrFile to fd 2...
	I0917 00:32:34.718357  913902 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:32:34.718564  913902 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-749120/.minikube/bin
	I0917 00:32:34.718732  913902 out.go:368] Setting JSON to false
	I0917 00:32:34.718752  913902 mustload.go:65] Loading cluster: multinode-136409
	I0917 00:32:34.718833  913902 notify.go:220] Checking for updates...
	I0917 00:32:34.719083  913902 config.go:182] Loaded profile config "multinode-136409": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:32:34.719111  913902 status.go:174] checking status of multinode-136409 ...
	I0917 00:32:34.719576  913902 cli_runner.go:164] Run: docker container inspect multinode-136409 --format={{.State.Status}}
	I0917 00:32:34.740385  913902 status.go:371] multinode-136409 host status = "Running" (err=<nil>)
	I0917 00:32:34.740450  913902 host.go:66] Checking if "multinode-136409" exists ...
	I0917 00:32:34.740728  913902 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-136409
	I0917 00:32:34.758319  913902 host.go:66] Checking if "multinode-136409" exists ...
	I0917 00:32:34.758593  913902 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:32:34.758656  913902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-136409
	I0917 00:32:34.774334  913902 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33669 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/multinode-136409/id_rsa Username:docker}
	I0917 00:32:34.865182  913902 ssh_runner.go:195] Run: systemctl --version
	I0917 00:32:34.869454  913902 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:32:34.880904  913902 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:32:34.932102  913902 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:65 SystemTime:2025-09-17 00:32:34.923244894 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:32:34.932773  913902 kubeconfig.go:125] found "multinode-136409" server: "https://192.168.67.2:8443"
	I0917 00:32:34.932809  913902 api_server.go:166] Checking apiserver status ...
	I0917 00:32:34.932861  913902 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:32:34.944355  913902 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1451/cgroup
	W0917 00:32:34.953549  913902 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1451/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:32:34.953597  913902 ssh_runner.go:195] Run: ls
	I0917 00:32:34.957104  913902 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0917 00:32:34.961237  913902 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0917 00:32:34.961259  913902 status.go:463] multinode-136409 apiserver status = Running (err=<nil>)
	I0917 00:32:34.961271  913902 status.go:176] multinode-136409 status: &{Name:multinode-136409 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:32:34.961292  913902 status.go:174] checking status of multinode-136409-m02 ...
	I0917 00:32:34.961645  913902 cli_runner.go:164] Run: docker container inspect multinode-136409-m02 --format={{.State.Status}}
	I0917 00:32:34.978507  913902 status.go:371] multinode-136409-m02 host status = "Running" (err=<nil>)
	I0917 00:32:34.978537  913902 host.go:66] Checking if "multinode-136409-m02" exists ...
	I0917 00:32:34.978837  913902 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-136409-m02
	I0917 00:32:34.995293  913902 host.go:66] Checking if "multinode-136409-m02" exists ...
	I0917 00:32:34.995557  913902 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:32:34.995603  913902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-136409-m02
	I0917 00:32:35.011989  913902 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33674 SSHKeyPath:/home/jenkins/minikube-integration/21550-749120/.minikube/machines/multinode-136409-m02/id_rsa Username:docker}
	I0917 00:32:35.102988  913902 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:32:35.114029  913902 status.go:176] multinode-136409-m02 status: &{Name:multinode-136409-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:32:35.114066  913902 status.go:174] checking status of multinode-136409-m03 ...
	I0917 00:32:35.114304  913902 cli_runner.go:164] Run: docker container inspect multinode-136409-m03 --format={{.State.Status}}
	I0917 00:32:35.131201  913902 status.go:371] multinode-136409-m03 host status = "Stopped" (err=<nil>)
	I0917 00:32:35.131220  913902 status.go:384] host is not running, skipping remaining checks
	I0917 00:32:35.131227  913902 status.go:176] multinode-136409-m03 status: &{Name:multinode-136409-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.13s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (7.03s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136409 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-136409 node start m03 -v=5 --alsologtostderr: (6.367642275s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136409 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.03s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (68.4s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-136409
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-136409
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-136409: (24.835336242s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-136409 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-136409 --wait=true -v=5 --alsologtostderr: (43.46962835s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-136409
--- PASS: TestMultiNode/serial/RestartKeepsNodes (68.40s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.02s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136409 node delete m03
E0917 00:33:53.030574  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/functional-695580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-136409 node delete m03: (4.456124408s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136409 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.02s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136409 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-136409 stop: (23.714631319s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136409 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-136409 status: exit status 7 (80.984165ms)

                                                
                                                
-- stdout --
	multinode-136409
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-136409-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136409 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-136409 status --alsologtostderr: exit status 7 (82.55048ms)

                                                
                                                
-- stdout --
	multinode-136409
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-136409-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 00:34:19.429278  923481 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:34:19.429572  923481 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:34:19.429580  923481 out.go:374] Setting ErrFile to fd 2...
	I0917 00:34:19.429584  923481 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:34:19.429763  923481 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-749120/.minikube/bin
	I0917 00:34:19.429923  923481 out.go:368] Setting JSON to false
	I0917 00:34:19.429942  923481 mustload.go:65] Loading cluster: multinode-136409
	I0917 00:34:19.430009  923481 notify.go:220] Checking for updates...
	I0917 00:34:19.430348  923481 config.go:182] Loaded profile config "multinode-136409": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:34:19.430383  923481 status.go:174] checking status of multinode-136409 ...
	I0917 00:34:19.430898  923481 cli_runner.go:164] Run: docker container inspect multinode-136409 --format={{.State.Status}}
	I0917 00:34:19.448373  923481 status.go:371] multinode-136409 host status = "Stopped" (err=<nil>)
	I0917 00:34:19.448398  923481 status.go:384] host is not running, skipping remaining checks
	I0917 00:34:19.448408  923481 status.go:176] multinode-136409 status: &{Name:multinode-136409 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:34:19.448479  923481 status.go:174] checking status of multinode-136409-m02 ...
	I0917 00:34:19.448734  923481 cli_runner.go:164] Run: docker container inspect multinode-136409-m02 --format={{.State.Status}}
	I0917 00:34:19.466561  923481 status.go:371] multinode-136409-m02 host status = "Stopped" (err=<nil>)
	I0917 00:34:19.466577  923481 status.go:384] host is not running, skipping remaining checks
	I0917 00:34:19.466583  923481 status.go:176] multinode-136409-m02 status: &{Name:multinode-136409-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.88s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (48.86s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-136409 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-136409 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (48.291671385s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-136409 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (48.86s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (21.92s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-136409
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-136409-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-136409-m02 --driver=docker  --container-runtime=containerd: exit status 14 (68.492621ms)

                                                
                                                
-- stdout --
	* [multinode-136409-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21550
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21550-749120/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-749120/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-136409-m02' is duplicated with machine name 'multinode-136409-m02' in profile 'multinode-136409'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-136409-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-136409-m03 --driver=docker  --container-runtime=containerd: (19.302088033s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-136409
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-136409: exit status 80 (273.582129ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-136409 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-136409-m03 already exists in multinode-136409-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-136409-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-136409-m03: (2.231172245s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (21.92s)

                                                
                                    
TestPreload (110.27s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-510134 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0
E0917 00:35:37.159770  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/addons-346612/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:35:49.958721  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/functional-695580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-510134 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0: (46.593944041s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-510134 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-510134 image pull gcr.io/k8s-minikube/busybox: (2.370015096s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-510134
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-510134: (5.604111432s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-510134 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-510134 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (53.12188728s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-510134 image list
helpers_test.go:175: Cleaning up "test-preload-510134" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-510134
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-510134: (2.360060946s)
--- PASS: TestPreload (110.27s)

                                                
                                    
TestScheduledStopUnix (95.67s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-862039 --memory=3072 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-862039 --memory=3072 --driver=docker  --container-runtime=containerd: (20.268822068s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-862039 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-862039 -n scheduled-stop-862039
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-862039 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0917 00:37:45.207242  752707 retry.go:31] will retry after 137.695µs: open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/scheduled-stop-862039/pid: no such file or directory
I0917 00:37:45.208433  752707 retry.go:31] will retry after 89.112µs: open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/scheduled-stop-862039/pid: no such file or directory
I0917 00:37:45.209554  752707 retry.go:31] will retry after 270.794µs: open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/scheduled-stop-862039/pid: no such file or directory
I0917 00:37:45.210814  752707 retry.go:31] will retry after 309.544µs: open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/scheduled-stop-862039/pid: no such file or directory
I0917 00:37:45.211957  752707 retry.go:31] will retry after 508.051µs: open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/scheduled-stop-862039/pid: no such file or directory
I0917 00:37:45.213057  752707 retry.go:31] will retry after 397.433µs: open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/scheduled-stop-862039/pid: no such file or directory
I0917 00:37:45.214210  752707 retry.go:31] will retry after 1.489054ms: open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/scheduled-stop-862039/pid: no such file or directory
I0917 00:37:45.216438  752707 retry.go:31] will retry after 1.67031ms: open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/scheduled-stop-862039/pid: no such file or directory
I0917 00:37:45.218632  752707 retry.go:31] will retry after 3.054498ms: open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/scheduled-stop-862039/pid: no such file or directory
I0917 00:37:45.221773  752707 retry.go:31] will retry after 5.240437ms: open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/scheduled-stop-862039/pid: no such file or directory
I0917 00:37:45.228272  752707 retry.go:31] will retry after 3.287596ms: open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/scheduled-stop-862039/pid: no such file or directory
I0917 00:37:45.232466  752707 retry.go:31] will retry after 10.114197ms: open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/scheduled-stop-862039/pid: no such file or directory
I0917 00:37:45.243680  752707 retry.go:31] will retry after 14.332055ms: open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/scheduled-stop-862039/pid: no such file or directory
I0917 00:37:45.258902  752707 retry.go:31] will retry after 21.250354ms: open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/scheduled-stop-862039/pid: no such file or directory
I0917 00:37:45.281170  752707 retry.go:31] will retry after 31.429679ms: open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/scheduled-stop-862039/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-862039 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-862039 -n scheduled-stop-862039
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-862039
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-862039 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-862039
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-862039: exit status 7 (66.147293ms)

                                                
                                                
-- stdout --
	scheduled-stop-862039
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-862039 -n scheduled-stop-862039
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-862039 -n scheduled-stop-862039: exit status 7 (65.510165ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-862039" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-862039
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-862039: (4.070296099s)
--- PASS: TestScheduledStopUnix (95.67s)

                                                
                                    
TestInsufficientStorage (9.12s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-018570 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-018570 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (6.777587237s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"226bda00-de01-4549-8f2d-b2b18bf3cb3a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-018570] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4a720e36-4846-4e13-9b3d-1e91b6b3c2f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21550"}}
	{"specversion":"1.0","id":"051160f6-2d4c-49a8-84f8-2b1120e75ddc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"acf7a4c1-f9d3-4352-a371-654c8f4d5da2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21550-749120/kubeconfig"}}
	{"specversion":"1.0","id":"24864610-3373-4f33-8e07-899fe4094415","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-749120/.minikube"}}
	{"specversion":"1.0","id":"0f289a75-d828-4bbb-b63a-1b383f8eb7ce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"a5801778-75c7-4945-a7dc-c674450d1ef3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"7fc27f47-2a6e-4029-838f-ed00ce1a236a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"a73d06d3-1b76-4cbb-bf8f-baa1852a1edf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"aee86eb4-0981-4dce-8bc6-35c3784bcfcc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"edc1ca86-8824-4999-b16c-13f97948e8b9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"3be2bbed-cddf-4b2c-af36-1f76daedb821","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-018570\" primary control-plane node in \"insufficient-storage-018570\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"795a8dfd-4b7c-4a9d-a97a-dd5f8b4030e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"43fd4b75-c175-4c7f-8d5e-9d2b4f349422","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"be13ad4d-7c42-4491-97e1-c5b1f450863d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-018570 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-018570 --output=json --layout=cluster: exit status 7 (260.595872ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-018570","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-018570","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0917 00:39:07.230087  945647 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-018570" does not appear in /home/jenkins/minikube-integration/21550-749120/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-018570 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-018570 --output=json --layout=cluster: exit status 7 (260.846868ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-018570","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-018570","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0917 00:39:07.492033  945752 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-018570" does not appear in /home/jenkins/minikube-integration/21550-749120/kubeconfig
	E0917 00:39:07.502335  945752 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/insufficient-storage-018570/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-018570" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-018570
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-018570: (1.824206566s)
--- PASS: TestInsufficientStorage (9.12s)

                                                
                                    
TestRunningBinaryUpgrade (45.42s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.1103819145 start -p running-upgrade-243045 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.1103819145 start -p running-upgrade-243045 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (19.881702838s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-243045 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-243045 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (20.985422892s)
helpers_test.go:175: Cleaning up "running-upgrade-243045" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-243045
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-243045: (1.941810916s)
--- PASS: TestRunningBinaryUpgrade (45.42s)

                                                
                                    
TestKubernetesUpgrade (313.3s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-898074 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0917 00:40:37.159820  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/addons-346612/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-898074 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (26.09533508s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-898074
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-898074: (1.254206531s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-898074 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-898074 status --format={{.Host}}: exit status 7 (83.316789ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-898074 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-898074 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m34.871162654s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-898074 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-898074 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-898074 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 106 (63.583366ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-898074] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21550
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21550-749120/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-749120/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-898074
	    minikube start -p kubernetes-upgrade-898074 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8980742 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.0, by running:
	    
	    minikube start -p kubernetes-upgrade-898074 --kubernetes-version=v1.34.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-898074 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-898074 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (8.839946686s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-898074" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-898074
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-898074: (2.035180084s)
--- PASS: TestKubernetesUpgrade (313.30s)

                                                
                                    
TestMissingContainerUpgrade (82.8s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.2550558311 start -p missing-upgrade-178349 --memory=3072 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.2550558311 start -p missing-upgrade-178349 --memory=3072 --driver=docker  --container-runtime=containerd: (27.950551763s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-178349
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-178349
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-178349 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-178349 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (49.158567349s)
helpers_test.go:175: Cleaning up "missing-upgrade-178349" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-178349
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-178349: (2.476695355s)
--- PASS: TestMissingContainerUpgrade (82.80s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-367805 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-367805 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 14 (70.243835ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-367805] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21550
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21550-749120/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-749120/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (25.75s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-367805 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
I0917 00:39:09.602630  752707 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate4093085235/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5b71c80 0x5b71c80 0x5b71c80 0x5b71c80 0x5b71c80 0x5b71c80 0x5b71c80] Decompressors:map[bz2:0xc000122cc0 gz:0xc000122cc8 tar:0xc000122c70 tar.bz2:0xc000122c80 tar.gz:0xc000122c90 tar.xz:0xc000122ca0 tar.zst:0xc000122cb0 tbz2:0xc000122c80 tgz:0xc000122c90 txz:0xc000122ca0 tzst:0xc000122cb0 xz:0xc000122cd0 zip:0xc000122ce0 zst:0xc000122cd8] Getters:map[file:0xc001844690 http:0xc0017ce370 https:0xc0017ce460] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0917 00:39:09.602687  752707 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate4093085235/001/docker-machine-driver-kvm2
I0917 00:39:10.829239  752707 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0917 00:39:10.829320  752707 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_containerd_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/Docker_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0917 00:39:10.862106  752707 install.go:137] /home/jenkins/workspace/Docker_Linux_containerd_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0917 00:39:10.862143  752707 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0917 00:39:10.862242  752707 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0917 00:39:10.862284  752707 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate4093085235/002/docker-machine-driver-kvm2
I0917 00:39:10.892210  752707 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate4093085235/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5b71c80 0x5b71c80 0x5b71c80 0x5b71c80 0x5b71c80 0x5b71c80 0x5b71c80] Decompressors:map[bz2:0xc000122cc0 gz:0xc000122cc8 tar:0xc000122c70 tar.bz2:0xc000122c80 tar.gz:0xc000122c90 tar.xz:0xc000122ca0 tar.zst:0xc000122cb0 tbz2:0xc000122c80 tgz:0xc000122c90 txz:0xc000122ca0 tzst:0xc000122cb0 xz:0xc000122cd0 zip:0xc000122ce0 zst:0xc000122cd8] Getters:map[file:0xc0018442d0 http:0xc0017ce5a0 https:0xc0017ce5f0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0917 00:39:10.892266  752707 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate4093085235/002/docker-machine-driver-kvm2
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-367805 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (25.340370877s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-367805 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (25.75s)

                                                
                                    
TestNetworkPlugins/group/false (5.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-284174 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-284174 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (190.660907ms)

                                                
                                                
-- stdout --
	* [false-284174] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21550
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21550-749120/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-749120/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 00:39:16.453287  948073 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:39:16.453635  948073 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:39:16.453647  948073 out.go:374] Setting ErrFile to fd 2...
	I0917 00:39:16.453653  948073 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:39:16.453947  948073 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-749120/.minikube/bin
	I0917 00:39:16.454455  948073 out.go:368] Setting JSON to false
	I0917 00:39:16.455778  948073 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":12098,"bootTime":1758057458,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 00:39:16.455872  948073 start.go:140] virtualization: kvm guest
	I0917 00:39:16.458956  948073 out.go:179] * [false-284174] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0917 00:39:16.460833  948073 out.go:179]   - MINIKUBE_LOCATION=21550
	I0917 00:39:16.460863  948073 notify.go:220] Checking for updates...
	I0917 00:39:16.464772  948073 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 00:39:16.465924  948073 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-749120/kubeconfig
	I0917 00:39:16.467011  948073 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-749120/.minikube
	I0917 00:39:16.469152  948073 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 00:39:16.470373  948073 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 00:39:16.471964  948073 config.go:182] Loaded profile config "NoKubernetes-367805": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:39:16.472113  948073 config.go:182] Loaded profile config "force-systemd-env-059670": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:39:16.472244  948073 config.go:182] Loaded profile config "offline-containerd-300446": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0917 00:39:16.472369  948073 driver.go:421] Setting default libvirt URI to qemu:///system
	I0917 00:39:16.501499  948073 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0917 00:39:16.501612  948073 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:39:16.585016  948073 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:84 SystemTime:2025-09-17 00:39:16.569553774 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:39:16.585123  948073 docker.go:318] overlay module found
	I0917 00:39:16.586658  948073 out.go:179] * Using the docker driver based on user configuration
	I0917 00:39:16.587686  948073 start.go:304] selected driver: docker
	I0917 00:39:16.587703  948073 start.go:918] validating driver "docker" against <nil>
	I0917 00:39:16.587717  948073 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 00:39:16.589325  948073 out.go:203] 
	W0917 00:39:16.590367  948073 out.go:285] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0917 00:39:16.591324  948073 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-284174 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-284174

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-284174

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-284174

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-284174

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-284174

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-284174

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-284174

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-284174

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-284174

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-284174

>>> host: /etc/nsswitch.conf:
* Profile "false-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-284174"

>>> host: /etc/hosts:
* Profile "false-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-284174"

>>> host: /etc/resolv.conf:
* Profile "false-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-284174"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-284174

>>> host: crictl pods:
* Profile "false-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-284174"

>>> host: crictl containers:
* Profile "false-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-284174"

>>> k8s: describe netcat deployment:
error: context "false-284174" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-284174" does not exist

>>> k8s: netcat logs:
error: context "false-284174" does not exist

>>> k8s: describe coredns deployment:
error: context "false-284174" does not exist

>>> k8s: describe coredns pods:
error: context "false-284174" does not exist

>>> k8s: coredns logs:
error: context "false-284174" does not exist

>>> k8s: describe api server pod(s):
error: context "false-284174" does not exist

>>> k8s: api server logs:
error: context "false-284174" does not exist

>>> host: /etc/cni:
* Profile "false-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-284174"

>>> host: ip a s:
* Profile "false-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-284174"

>>> host: ip r s:
* Profile "false-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-284174"

>>> host: iptables-save:
* Profile "false-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-284174"

>>> host: iptables table nat:
* Profile "false-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-284174"

>>> k8s: describe kube-proxy daemon set:
error: context "false-284174" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-284174" does not exist

>>> k8s: kube-proxy logs:
error: context "false-284174" does not exist

>>> host: kubelet daemon status:
* Profile "false-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-284174"

>>> host: kubelet daemon config:
* Profile "false-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-284174"

>>> k8s: kubelet logs:
* Profile "false-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-284174"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-284174"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-284174"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-284174

>>> host: docker daemon status:
* Profile "false-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-284174"

>>> host: docker daemon config:
* Profile "false-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-284174"

>>> host: /etc/docker/daemon.json:
* Profile "false-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-284174"

>>> host: docker system info:
* Profile "false-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-284174"

>>> host: cri-docker daemon status:
* Profile "false-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-284174"

>>> host: cri-docker daemon config:
* Profile "false-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-284174"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-284174"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-284174"

>>> host: cri-dockerd version:
* Profile "false-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-284174"

>>> host: containerd daemon status:
* Profile "false-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-284174"

>>> host: containerd daemon config:
* Profile "false-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-284174"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-284174"

>>> host: /etc/containerd/config.toml:
* Profile "false-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-284174"

>>> host: containerd config dump:
* Profile "false-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-284174"

>>> host: crio daemon status:
* Profile "false-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-284174"

>>> host: crio daemon config:
* Profile "false-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-284174"

>>> host: /etc/crio:
* Profile "false-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-284174"

>>> host: crio config:
* Profile "false-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-284174"

----------------------- debugLogs end: false-284174 [took: 5.001686802s] --------------------------------
helpers_test.go:175: Cleaning up "false-284174" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-284174
--- PASS: TestNetworkPlugins/group/false (5.36s)
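Note: every probe in the debug dump above fails with a missing-context or missing-profile error. That appears to be expected here, since the "false" network-plugin case never starts a cluster, so no false-284174 profile or kubectl context ever exists before the profile is cleaned up. As a minimal illustrative sketch (not part of the test suite; the helper name and the hard-coded context are assumptions), a guard like this could skip the kubectl-based probes when the context is absent:

package main

import (
	"fmt"
	"os/exec"
)

// contextExists reports whether kubectl knows about the named context;
// `kubectl config get-contexts NAME` exits non-zero for an unknown context.
func contextExists(name string) bool {
	return exec.Command("kubectl", "config", "get-contexts", name).Run() == nil
}

func main() {
	if !contextExists("false-284174") {
		fmt.Println("no kubectl context for this profile; skipping kubectl-based diagnostics")
		return
	}
	fmt.Println("context found; safe to collect kubectl diagnostics")
}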

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (18.48s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-367805 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-367805 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (16.228255906s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-367805 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-367805 status -o json: exit status 2 (291.16181ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-367805","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-367805
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-367805: (1.962721848s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (18.48s)
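Note: the non-zero exit above is expected. With --no-kubernetes the node container is running while the kubelet and API server stay stopped, and `minikube status` reports that mix through its exit code. A minimal sketch of decoding the JSON shown in the stdout block (struct fields copied from that output; this is illustrative, not the test's own code):

package main

import (
	"encoding/json"
	"fmt"
)

// minikubeStatus mirrors the fields printed by `minikube status -o json` above.
type minikubeStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	raw := `{"Name":"NoKubernetes-367805","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
	var st minikubeStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	// Host is Running while kubelet/apiserver are Stopped, hence the exit status 2.
	fmt.Printf("host=%s kubelet=%s apiserver=%s\n", st.Host, st.Kubelet, st.APIServer)
}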

                                                
                                    
x
+
TestNoKubernetes/serial/Start (7.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-367805 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-367805 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (7.078260359s)
--- PASS: TestNoKubernetes/serial/Start (7.08s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (2.62s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.62s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-367805 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-367805 "sudo systemctl is-active --quiet service kubelet": exit status 1 (319.216576ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.32s)

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (4.12s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:171: (dbg) Done: out/minikube-linux-amd64 profile list: (3.357435828s)
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (4.12s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (78.68s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.805124802 start -p stopped-upgrade-762953 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.805124802 start -p stopped-upgrade-762953 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (47.220992084s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.805124802 -p stopped-upgrade-762953 stop
E0917 00:40:49.958995  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/functional-695580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.805124802 -p stopped-upgrade-762953 stop: (1.781868133s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-762953 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-762953 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (29.671941287s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (78.68s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.55s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-367805
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-367805: (1.553126033s)
--- PASS: TestNoKubernetes/serial/Stop (1.55s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (7.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-367805 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-367805 --driver=docker  --container-runtime=containerd: (7.084380189s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.08s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-367805 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-367805 "sudo systemctl is-active --quiet service kubelet": exit status 1 (316.38737ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.21s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-762953
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-762953: (1.211695056s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.21s)

                                                
                                    
x
+
TestPause/serial/Start (94.42s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-593182 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-593182 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m34.419856831s)
--- PASS: TestPause/serial/Start (94.42s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (118.65s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-284174 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-284174 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m58.652903822s)
--- PASS: TestNetworkPlugins/group/auto/Start (118.65s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (5.82s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-593182 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-593182 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (5.804327986s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (5.82s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (45.61s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-284174 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-284174 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (45.614659801s)
--- PASS: TestNetworkPlugins/group/calico/Start (45.61s)

                                                
                                    
x
+
TestPause/serial/Pause (0.75s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-593182 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.75s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.3s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-593182 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-593182 --output=json --layout=cluster: exit status 2 (302.595507ms)

                                                
                                                
-- stdout --
	{"Name":"pause-593182","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-593182","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.30s)
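Note: in the `--layout=cluster` status above, "Paused" is encoded as status code 418, which is why the status command exits 2 even though the pause succeeded. A small illustrative decoder for the JSON printed in the stdout block (field names taken from that output, abridged; these are not minikube's own types):

package main

import (
	"encoding/json"
	"fmt"
)

type component struct {
	Name       string
	StatusCode int
	StatusName string
}

type node struct {
	Name       string
	StatusCode int
	StatusName string
	Components map[string]component
}

type clusterStatus struct {
	Name       string
	StatusCode int
	StatusName string
	Nodes      []node
}

func main() {
	// Abridged from the stdout block above.
	raw := `{"Name":"pause-593182","StatusCode":418,"StatusName":"Paused","Nodes":[{"Name":"pause-593182","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`
	var cs clusterStatus
	if err := json.Unmarshal([]byte(raw), &cs); err != nil {
		panic(err)
	}
	fmt.Println(cs.StatusName, cs.Nodes[0].Components["apiserver"].StatusName)
}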

                                                
                                    
x
+
TestPause/serial/Unpause (0.61s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-593182 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.61s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.69s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-593182 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.69s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (2.63s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-593182 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-593182 --alsologtostderr -v=5: (2.627678593s)
--- PASS: TestPause/serial/DeletePaused (2.63s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (14.92s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (14.861099596s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-593182
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-593182: exit status 1 (16.793859ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-593182: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (14.92s)
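Note: the cleanup verification above treats a failing `docker volume inspect` as success; once the pause-593182 profile has been deleted, the daemon should report "no such volume". A hedged sketch of that same check, mirroring the command shown in the log (illustrative only):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("docker", "volume", "inspect", "pause-593182").CombinedOutput()
	if err != nil {
		// Expected once the profile has been deleted: the daemon reports "no such volume".
		fmt.Printf("volume gone as expected: %s", out)
		return
	}
	fmt.Println("volume still present:", string(out))
}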

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (137.66s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-284174 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-284174 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (2m17.657216556s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (137.66s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-bnxr6" [3dd955b1-ae4b-4e55-8d62-c113f546cbc9] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003986003s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-284174 "pgrep -a kubelet"
I0917 00:44:07.901995  752707 config.go:182] Loaded profile config "calico-284174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (9.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-284174 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-6vjz8" [992089a8-af65-44cc-b4a7-a80e72d29725] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-6vjz8" [992089a8-af65-44cc-b4a7-a80e72d29725] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.003885978s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-284174 "pgrep -a kubelet"
I0917 00:44:08.807347  752707 config.go:182] Loaded profile config "auto-284174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.45s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (9.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-284174 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-skxsl" [d5c6683a-1abe-4412-ae06-95b8b1d4ec54] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-skxsl" [d5c6683a-1abe-4412-ae06-95b8b1d4ec54] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.002801885s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-284174 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-284174 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-284174 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-284174 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-284174 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-284174 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.11s)
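Note: the DNS/Localhost/HairPin checks above run short probes inside the netcat deployment via `kubectl exec`: an nslookup of kubernetes.default, an `nc` to localhost, and an `nc` to the pod's own service name (the hairpin path through the CNI). A minimal sketch of that pattern, assuming the auto-284174 context from the log; the helper below is hypothetical, not the test's own code:

package main

import (
	"fmt"
	"os/exec"
)

// execInNetcat runs a shell one-liner inside the netcat deployment, mirroring the
// `kubectl --context ... exec deployment/netcat -- /bin/sh -c ...` calls in the log.
func execInNetcat(kubeContext, command string) error {
	args := []string{"--context", kubeContext, "exec", "deployment/netcat", "--", "/bin/sh", "-c", command}
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	fmt.Printf("%s", out)
	return err
}

func main() {
	ctx := "auto-284174"
	// HairPin: the pod dials its own service name, exercising the CNI's hairpin/NAT path.
	if err := execInNetcat(ctx, "nc -w 5 -i 5 -z netcat 8080"); err != nil {
		fmt.Println("hairpin check failed:", err)
	}
}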

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (118.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-284174 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-284174 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m58.300668854s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (118.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (198.64s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-284174 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E0917 00:45:20.238056  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/addons-346612/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-284174 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (3m18.635339935s)
--- PASS: TestNetworkPlugins/group/flannel/Start (198.64s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (128.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-284174 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
E0917 00:45:37.159582  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/addons-346612/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:45:49.958582  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/functional-695580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-284174 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (2m8.582113035s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (128.58s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-284174 "pgrep -a kubelet"
I0917 00:45:59.349347  752707 config.go:182] Loaded profile config "custom-flannel-284174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (8.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-284174 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-fdzb6" [1175e492-1c4c-4a59-ae01-5a640f605554] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-fdzb6" [1175e492-1c4c-4a59-ae01-5a640f605554] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 8.003604552s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (8.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-284174 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-284174 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-284174 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (69.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-284174 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-284174 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m9.159403295s)
--- PASS: TestNetworkPlugins/group/bridge/Start (69.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-pt9gz" [34ad2de1-4f8d-4d2d-8148-8de990f31413] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004310129s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-284174 "pgrep -a kubelet"
I0917 00:46:41.761464  752707 config.go:182] Loaded profile config "kindnet-284174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (8.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-284174 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-fwt8m" [d1892dfe-3484-4efc-98f5-b397e32b8444] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-fwt8m" [d1892dfe-3484-4efc-98f5-b397e32b8444] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 8.004177442s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (8.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-284174 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-284174 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-284174 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.11s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (114.95s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-099552 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-099552 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (1m54.951335651s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (114.95s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-284174 "pgrep -a kubelet"
I0917 00:47:36.666568  752707 config.go:182] Loaded profile config "bridge-284174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (8.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-284174 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-wnpvj" [0a6a58b9-6b0a-4040-b33c-fb0203f17b6a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-wnpvj" [0a6a58b9-6b0a-4040-b33c-fb0203f17b6a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 8.003297143s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (8.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-284174 "pgrep -a kubelet"
I0917 00:47:44.241011  752707 config.go:182] Loaded profile config "enable-default-cni-284174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-284174 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-zrsmw" [9c4ffb0f-2abd-46af-8545-f2250a68688e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-zrsmw" [9c4ffb0f-2abd-46af-8545-f2250a68688e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 8.003654766s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-284174 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-284174 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-284174 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-284174 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-284174 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-284174 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-vnd4m" [66b50cc0-0da4-4f46-bf46-78e60fd6e40d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003260911s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
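ControllerPod only verifies that the flannel DaemonSet pod (label app=flannel, namespace kube-flannel) is Running. A rough equivalent check for the flannel-284174 profile used in this run; the test's own 10m poll is approximated here with kubectl wait:

    kubectl --context flannel-284174 -n kube-flannel get pods -l app=flannel
    kubectl --context flannel-284174 -n kube-flannel wait --for=condition=Ready pod -l app=flannel --timeout=10m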

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-284174 "pgrep -a kubelet"
I0917 00:48:02.279470  752707 config.go:182] Loaded profile config "flannel-284174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)
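KubeletFlags checks nothing cluster-side; it shells into the node and lists the running kubelet process so its command-line flags can be inspected (pgrep -a prints the full command line). The command from this run:

    out/minikube-linux-amd64 ssh -p flannel-284174 "pgrep -a kubelet"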

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (9.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-284174 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-tb7vx" [c62ef6bd-0596-4e4b-a7e8-587a822e8ce5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-tb7vx" [c62ef6bd-0596-4e4b-a7e8-587a822e8ce5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.004317612s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.19s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (121.4s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-305343 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-305343 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0: (2m1.401490708s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (121.40s)
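The no-preload profile starts with --preload=false, so minikube skips the preloaded images/binaries tarball and has to pull the Kubernetes v1.34.0 images itself, which is why this first start takes roughly two minutes. The invocation from this run, wrapped for readability:

    out/minikube-linux-amd64 start -p no-preload-305343 --memory=3072 --alsologtostderr --wait=true \
      --preload=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.34.0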

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-284174 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-284174 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-284174 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (43.92s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-011954 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-011954 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0: (43.923815819s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (43.92s)
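default-k8s-diff-port differs from a stock profile only in --apiserver-port=8444, putting the API server on 8444 instead of minikube's usual 8443. The start command from this run, wrapped for readability:

    out/minikube-linux-amd64 start -p default-k8s-diff-port-011954 --memory=3072 --alsologtostderr --wait=true \
      --apiserver-port=8444 --driver=docker --container-runtime=containerd --kubernetes-version=v1.34.0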

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (45.41s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-895748 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-895748 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0: (45.414029242s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (45.41s)
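newest-cni starts with only a CNI network plugin selected and a custom pod CIDR passed through to kubeadm; --wait is narrowed to apiserver,system_pods,default_sa because, as the WARNING lines in the later newest-cni steps note, user pods cannot schedule until a CNI is actually installed. The command from this run, wrapped for readability:

    out/minikube-linux-amd64 start -p newest-cni-895748 --memory=3072 --alsologtostderr \
      --wait=apiserver,system_pods,default_sa --network-plugin=cni \
      --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
      --driver=docker --container-runtime=containerd --kubernetes-version=v1.34.0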

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-011954 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [3fb5742a-f911-4ccc-82bf-170cf17c2ccc] Pending
helpers_test.go:352: "busybox" [3fb5742a-f911-4ccc-82bf-170cf17c2ccc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [3fb5742a-f911-4ccc-82bf-170cf17c2ccc] Running
E0917 00:49:01.595243  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/calico-284174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:49:01.601591  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/calico-284174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:49:01.612925  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/calico-284174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:49:01.634247  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/calico-284174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:49:01.675545  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/calico-284174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:49:01.756927  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/calico-284174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:49:01.918399  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/calico-284174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:49:02.239986  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/calico-284174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:49:02.881640  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/calico-284174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003850936s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-011954 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.25s)
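DeployApp creates the repo's busybox test pod, waits up to eight minutes for it to run, then reads the container's open-file limit as a quick exec sanity check against the containerd runtime. A rough by-hand equivalent (testdata/busybox.yaml contents are not shown in this log; the label selector comes from the wait message above):

    kubectl --context default-k8s-diff-port-011954 create -f testdata/busybox.yaml
    kubectl --context default-k8s-diff-port-011954 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
    kubectl --context default-k8s-diff-port-011954 exec busybox -- /bin/sh -c "ulimit -n"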

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.26s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-099552 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [b1886520-514f-4466-82bb-67601063a23d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0917 00:49:04.163167  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/calico-284174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [b1886520-514f-4466-82bb-67601063a23d] Running
E0917 00:49:09.026288  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/auto-284174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:49:09.032697  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/auto-284174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:49:09.044052  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/auto-284174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:49:09.065373  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/auto-284174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:49:09.106774  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/auto-284174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:49:09.188202  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/auto-284174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:49:09.349641  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/auto-284174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:49:09.671610  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/auto-284174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:49:10.313075  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/auto-284174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:49:11.594572  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/auto-284174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:49:11.846684  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/calico-284174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004717485s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-099552 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.81s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-011954 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-011954 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.81s)
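EnableAddonWhileActive enables the metrics-server addon with its image and registry overridden (--images / --registries), then only describes the resulting Deployment in kube-system, so the check exercises the addon wiring without depending on the real metrics-server image. The commands from this run, wrapped for readability:

    out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-011954 \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
    kubectl --context default-k8s-diff-port-011954 describe deploy/metrics-server -n kube-system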

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-011954 --alsologtostderr -v=3
E0917 00:49:06.725205  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/calico-284174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-011954 --alsologtostderr -v=3: (12.082085674s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.08s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.88s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-099552 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-099552 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.88s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.04s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-099552 --alsologtostderr -v=3
E0917 00:49:14.156067  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/auto-284174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-099552 --alsologtostderr -v=3: (12.041891451s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.04s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-011954 -n default-k8s-diff-port-011954
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-011954 -n default-k8s-diff-port-011954: exit status 7 (81.188574ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-011954 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)
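EnableAddonAfterStop leans on minikube status exit codes: with the host stopped, status exits 7 (which the test treats as "may be ok") rather than 0, and the dashboard addon can still be enabled against the stopped profile so it takes effect on the next start. A sketch of the same sequence; the exit-code echo is added here for illustration only:

    out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-011954 -n default-k8s-diff-port-011954 \
      || echo "status exited $? (7 = host stopped in this run)"
    out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-011954 \
      --images=MetricsScraper=registry.k8s.io/echoserver:1.4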

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.85s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-895748 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.85s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (51.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-011954 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0
E0917 00:49:19.277908  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/auto-284174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-011954 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0: (50.770854233s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-011954 -n default-k8s-diff-port-011954
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (51.07s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-895748 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-895748 --alsologtostderr -v=3: (1.247866396s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.25s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-895748 -n newest-cni-895748
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-895748 -n newest-cni-895748: exit status 7 (71.796349ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-895748 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (15.45s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-895748 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0
E0917 00:49:22.088197  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/calico-284174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-895748 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0: (15.113088164s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-895748 -n newest-cni-895748
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (15.45s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-099552 -n old-k8s-version-099552
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-099552 -n old-k8s-version-099552: exit status 7 (86.276329ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-099552 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.32s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-895748 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.32s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.77s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-895748 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-895748 -n newest-cni-895748
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-895748 -n newest-cni-895748: exit status 2 (316.00621ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-895748 -n newest-cni-895748
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-895748 -n newest-cni-895748: exit status 2 (321.696522ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-895748 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-895748 -n newest-cni-895748
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-895748 -n newest-cni-895748
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.77s)
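Pause suspends the control plane rather than deleting anything: after pause, status reports the API server as Paused and the kubelet as Stopped and exits with code 2 (again "may be ok" to the test); unpause brings both back. The sequence from this run:

    out/minikube-linux-amd64 pause -p newest-cni-895748 --alsologtostderr -v=1
    out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-895748 -n newest-cni-895748   # prints Paused, exit code 2 above
    out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-895748 -n newest-cni-895748     # prints Stopped, exit code 2 above
    out/minikube-linux-amd64 unpause -p newest-cni-895748 --alsologtostderr -v=1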

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (80.82s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-656365 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0
E0917 00:49:42.570343  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/calico-284174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:49:50.001079  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/auto-284174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-656365 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0: (1m20.820703584s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (80.82s)
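--embed-certs makes minikube embed the client certificates directly in the generated kubeconfig entry instead of pointing at the certificate files under the .minikube directory; otherwise this start matches the other profiles in the run. The command, wrapped for readability:

    out/minikube-linux-amd64 start -p embed-certs-656365 --memory=3072 --alsologtostderr --wait=true \
      --embed-certs --driver=docker --container-runtime=containerd --kubernetes-version=v1.34.0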

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (8.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-305343 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [16a5dec3-de48-4ca1-8db4-2513a38056a1] Pending
helpers_test.go:352: "busybox" [16a5dec3-de48-4ca1-8db4-2513a38056a1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [16a5dec3-de48-4ca1-8db4-2513a38056a1] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.003444574s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-305343 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.24s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-hwhcr" [c9a25886-2a16-4179-a81e-d67b5fe63833] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003869115s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.8s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-305343 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-305343 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.80s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (11.98s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-305343 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-305343 --alsologtostderr -v=3: (11.982417848s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.98s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-hwhcr" [c9a25886-2a16-4179-a81e-d67b5fe63833] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003638745s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-011954 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-011954 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.69s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-011954 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-011954 -n default-k8s-diff-port-011954
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-011954 -n default-k8s-diff-port-011954: exit status 2 (288.210077ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-011954 -n default-k8s-diff-port-011954
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-011954 -n default-k8s-diff-port-011954: exit status 2 (286.38046ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-011954 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-011954 -n default-k8s-diff-port-011954
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-011954 -n default-k8s-diff-port-011954
E0917 00:50:23.532565  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/calico-284174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.69s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-305343 -n no-preload-305343
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-305343 -n no-preload-305343: exit status 7 (70.809552ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-305343 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (44.52s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-305343 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-305343 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0: (44.224753977s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-305343 -n no-preload-305343
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (44.52s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-656365 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [3fee8b45-a0bb-4c40-a360-cf8a1ab26cfc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0917 00:51:04.692398  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/custom-flannel-284174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [3fee8b45-a0bb-4c40-a360-cf8a1ab26cfc] Running
E0917 00:51:09.813923  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/custom-flannel-284174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003178118s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-656365 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.23s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-f67pj" [201fd3cc-74a0-4f99-aa9d-0a75b24eeb97] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003411596s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.81s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-656365 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-656365 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.81s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (11.93s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-656365 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-656365 --alsologtostderr -v=3: (11.934128411s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.93s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-f67pj" [201fd3cc-74a0-4f99-aa9d-0a75b24eeb97] Running
E0917 00:51:20.055303  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/custom-flannel-284174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003073247s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-305343 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-305343 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.78s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-305343 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-305343 -n no-preload-305343
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-305343 -n no-preload-305343: exit status 2 (291.618926ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-305343 -n no-preload-305343
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-305343 -n no-preload-305343: exit status 2 (300.162046ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-305343 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-305343 -n no-preload-305343
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-305343 -n no-preload-305343
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.78s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-656365 -n embed-certs-656365
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-656365 -n embed-certs-656365: exit status 7 (76.482088ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-656365 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (43.14s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-656365 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-656365 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0: (42.839902827s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-656365 -n embed-certs-656365
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (43.14s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-nd2g8" [bb1142c9-31d2-4daf-b63c-6697d9af4a53] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004107505s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-nd2g8" [bb1142c9-31d2-4daf-b63c-6697d9af4a53] Running
E0917 00:52:16.456367  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/kindnet-284174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003658564s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-656365 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-656365 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.63s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-656365 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-656365 -n embed-certs-656365
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-656365 -n embed-certs-656365: exit status 2 (285.043914ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-656365 -n embed-certs-656365
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-656365 -n embed-certs-656365: exit status 2 (283.38462ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-656365 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-656365 -n embed-certs-656365
E0917 00:52:21.499089  752707 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-749120/.minikube/profiles/custom-flannel-284174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-656365 -n embed-certs-656365
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.63s)
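For reference, the pause verification above can be reproduced by hand with the same commands the test runs against this run's profile (embed-certs-656365). This is only a sketch of the sequence the log shows; as the test itself notes, exit status 2 from "status" while the cluster is paused is expected:

  # pause the control plane and kubelet
  out/minikube-linux-amd64 pause -p embed-certs-656365 --alsologtostderr -v=1

  # while paused, status exits with code 2 and reports Paused / Stopped
  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-656365
  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-656365

  # resume, then re-check both status fields as the test does
  out/minikube-linux-amd64 unpause -p embed-certs-656365 --alsologtostderr -v=1
  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-656365
  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-656365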

                                                
                                    

Test skip (25/329)

TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.0/kubectl (0.00s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)
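TestSkaffold, like the DockerEnv and PodmanEnv skips above, depends on minikube's docker-env, which only works when the cluster runs the docker container runtime; this job tests containerd, so the whole path is skipped. Purely as an illustration of what a docker-runtime run would exercise (the profile name "skaffold-demo" below is made up for the example):

  # start a cluster with the docker runtime (not possible in this containerd job)
  minikube start -p skaffold-demo --container-runtime=docker

  # point the local docker client at the cluster's daemon, then let skaffold build and deploy into it
  eval $(minikube -p skaffold-demo docker-env)
  skaffold run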

                                                
                                    
TestNetworkPlugins/group/kubenet (7.07s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
I0917 00:39:09.331839  752707 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0917 00:39:09.331990  752707 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_containerd_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/Docker_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0917 00:39:09.368483  752707 install.go:62] docker-machine-driver-kvm2: exit status 1
W0917 00:39:09.368637  752707 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0917 00:39:09.368696  752707 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate4093085235/001/docker-machine-driver-kvm2
panic.go:636: 
----------------------- debugLogs start: kubenet-284174 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-284174

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-284174

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-284174

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-284174

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-284174

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-284174

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-284174

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-284174

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-284174

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-284174

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-284174"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-284174"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-284174"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-284174

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-284174"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-284174"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-284174" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-284174" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-284174" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-284174" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-284174" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-284174" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-284174" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-284174" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-284174"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-284174"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-284174"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-284174"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-284174"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-284174" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-284174" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-284174" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-284174"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-284174"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-284174"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-284174"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-284174"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-284174

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-284174"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-284174"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-284174"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-284174"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-284174"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-284174"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-284174"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-284174"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-284174"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-284174"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-284174"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-284174"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-284174"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-284174"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-284174"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-284174"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-284174"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-284174"

                                                
                                                
----------------------- debugLogs end: kubenet-284174 [took: 6.512854047s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-284174" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-284174
--- SKIP: TestNetworkPlugins/group/kubenet (7.07s)
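The kubenet group above is skipped because kubenet is not a CNI plugin and the containerd runtime requires one; the interleaved install.go/download.go lines near the top of the entry come from the concurrent TestKVMDriverInstallOrUpdate run (note the TestKVMDriverInstallOrUpdate temp directory in the download path). On containerd, the other network-plugin groups start clusters with an explicit CNI instead. A hedged sketch of an equivalent manual invocation, with bridge chosen only as an example of a valid --cni value:

  # containerd has no built-in pod networking, so a CNI must be selected at start
  minikube start -p kubenet-284174 --container-runtime=containerd --cni=bridge
  kubectl --context kubenet-284174 get pods -A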

                                                
                                    
TestNetworkPlugins/group/cilium (3.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-284174 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-284174

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-284174

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-284174

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-284174

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-284174

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-284174

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-284174

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-284174

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-284174

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-284174

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-284174"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-284174"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-284174"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-284174

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-284174"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-284174"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-284174" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-284174" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-284174" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-284174" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-284174" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-284174" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-284174" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-284174" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-284174"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-284174"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-284174"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-284174"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-284174"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-284174

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-284174

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-284174" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-284174" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-284174

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-284174

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-284174" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-284174" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-284174" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-284174" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-284174" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-284174"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-284174"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-284174"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-284174"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-284174"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-284174

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-284174"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-284174"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-284174"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-284174"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-284174"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-284174"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-284174"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-284174"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-284174"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-284174"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-284174"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-284174"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-284174"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-284174"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-284174"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-284174"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-284174"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-284174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-284174"

                                                
                                                
----------------------- debugLogs end: cilium-284174 [took: 3.384643729s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-284174" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-284174
--- SKIP: TestNetworkPlugins/group/cilium (3.54s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-908870" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-908870
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                    